
The Spoon

Daily news and analysis about the food tech revolution


Interfaces

June 26, 2025

Could Lasers Made From Olive Oil Be The Next-Gen Freshness Detector or Use-By Label?

Imagine scanning a tuna steak in your fridge and suddenly a tiny laser pulse beams an expiration date or, surprise, tells you it’s not really wild-caught.

That’s no longer sci-fi: new research from the Jožef Stefan Institute in Slovenia and the Aristotle University of Thessaloniki in Greece, published in Advanced Optical Materials, describes how the team created edible microlasers crafted entirely from food-grade ingredients, essentially turning food in a tupperware container or on your dinner plate into a data-rich interface with the potential to relay information about freshness, provenance, even safety.

So, how does it work? The researchers created tiny edible lasers from food-grade materials like olive oil, coconut oil, and sugar-based droplets combined with natural colorants such as chlorophyll or curcumin. The droplets act as tiny optical cavities that trap and amplify light using a principle called “whispering gallery mode resonance.” When excited by an external light source, they emit a laser-like signal. Because the lasing behavior is sensitive to environmental factors like temperature, pH, and chemical composition, these microlasers can be used as sensors embedded directly in food to help detect spoilage, confirm authenticity, or monitor freshness. And, according to the team, all of this happens without adding anything inedible to the product.
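The resonance condition behind this is simple enough to sketch: in a whispering-gallery cavity, a mode of integer order m fits when roughly m times the wavelength equals the droplet's optical circumference, and any change in refractive index shifts every resonance proportionally, which is what makes the droplet a sensor. Here is a rough back-of-the-envelope sketch in Python (the droplet size and index values are illustrative, not taken from the paper):

```python
import math

def wgm_resonance_wavelengths(radius_um, refractive_index, band=(600.0, 700.0)):
    """Approximate whispering-gallery resonances of a droplet:
    a mode of integer order m fits when m * wavelength = 2*pi*R*n."""
    circumference_nm = 2 * math.pi * radius_um * 1000 * refractive_index
    lo, hi = band
    modes = []
    m = int(circumference_nm // hi) + 1  # first mode inside the band
    while True:
        wavelength = circumference_nm / m
        if wavelength < lo:
            break
        modes.append((m, wavelength))
        m += 1
    return modes

# A ~20 um oil droplet (n ~ 1.47): resonances in the red part of the spectrum
modes = wgm_resonance_wavelengths(20.0, 1.47)

# Sensing principle: a small change in refractive index (e.g. from pH or
# spoilage chemistry) shifts every resonance proportionally.
m0, lam0 = modes[0]
shifted = lam0 * (1.48 / 1.47)
print(f"mode m={m0}: {lam0:.2f} nm -> {shifted:.2f} nm after dn=0.01")
```

Reading out that tiny wavelength shift is what turns a droplet of olive oil into a freshness sensor.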

The paper explores different applications, such as edible barcodes applied onto the food itself rather than on packaging. Another idea is food with built-in freshness sensors: salad kits that glow a warning when the pH level shifts, or olive oil bottles that carry internal glow-signatures to confirm authenticity.

This isn’t the first time we’ve heard of technology for freshness, authenticity or changing chemistry built directly into the food itself. A couple of years ago, a company called Index Biosystems developed a form of invisible barcode called a BioTag, which is created by mixing baker’s yeast in extremely small amounts with water, then spraying or misting it onto products such as wheat. BioTags can later be read using molecular detection techniques such as PCR and DNA sequencing.

The BioTag is a cool concept, but this new breakthrough from the Mediterranean scientists seems like something that, if it is ultimately commercialized, could be much more approachable for the end user, who doesn’t have access to tools like DNA sequencing (that is, if you consider lasers shooting from your food ‘approachable’).

With the debate over use-by date labels heating up – especially after California became the first state in years to introduce a new approach, prompting several other states to consider following suit – this kind of technology is a sign that someday our food might be able to tell us directly whether it’s still good to eat.

August 9, 2021

Make Beats at the Breakfast Table with Reese’s Puffs AR Cereal Box Drum Machine

After seeing Mark Ronson’s “Watch the Sound” series on music, my 10-year-old son is now very into drum machines and beats. And while he would love a vintage Roland TR-808 to kick off his burgeoning music career, I think instead I’ll get him a box of Reese’s Puffs cereal.

Not that I think a bowl of Reese’s Puffs is the breakfast of champions, but a new promotional box for the cereal out now features an augmented reality drum machine on the back. I received a press release about the new cereal box beatmaker this morning. Usually when I get these types of emails, I immediately toss them. But as I looked at the information, it actually seemed like a pretty cool use of technology, so I went out and bought a box this morning (with apologies to my wife, who does not yet know there is a giant box of sugar cereal jammed into our pantry).

Here’s how it works. On the back of the box is a diagram of the RP-FX drum pad. You scan a special QR code with your mobile phone and it takes you to a special Reese’s web app that accesses your phone’s camera. Place cereal puffs wherever you like on the drum pad spaces and then hover your camera phone over the box. Using AR, the app “reads” where you placed your puffs and generates a beat accordingly. Move the puffs around and the beat changes.
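The article doesn’t describe the app’s internals, but the basic idea, mapping detected puff positions on a printed grid to a step-sequencer pattern, is easy to sketch. The row names and grid size below are invented for illustration, not taken from the actual Reese’s web app:

```python
# Hypothetical sketch: turning detected puff positions on a printed
# 4-row x 8-step drum pad into a step-sequencer pattern.
ROWS = ["kick", "snare", "hat", "clap"]
STEPS = 8

def puffs_to_pattern(puff_positions):
    """puff_positions: (row, step) cells where the camera detected a puff."""
    pattern = {row: [False] * STEPS for row in ROWS}
    for row_idx, step in puff_positions:
        if 0 <= row_idx < len(ROWS) and 0 <= step < STEPS:
            pattern[ROWS[row_idx]][step] = True
    return pattern

def render(pattern):
    """Print the pattern like a classic drum-machine grid."""
    return "\n".join(
        f"{row:>5} |" + "".join("x" if hit else "." for hit in pattern[row])
        for row in ROWS
    )

beat = puffs_to_pattern([(0, 0), (0, 4), (1, 2), (1, 6), (2, 0), (2, 2)])
print(render(beat))
```

Move a puff to a different cell and the pattern, and the resulting beat, changes, which is essentially the interaction the AR app provides.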

It’s definitely not high fidelity or Pro Tools quality production, but it actually works surprisingly well. Once you have your beat just as you like, you can use the app to record it, so you can share it with friends or use it as the basis of your next club banger.

You can hear the one I whipped up this morning here (or, you know, wait a few months and it’ll be all over the radio).

As noted earlier, I typically shy away from covering promotional stunts like this. But as a parent and a fan of both music and technology, this promotion is actually worth, well, promoting. Besides, using a cereal box to build your own beat sure beats digging for a cheap plastic toy at the bottom of one.

April 15, 2021

Voiceitt’s App Aims to Make Those with Speech Impairments Able to Use Voice Control

It’s easy to joke about digital assistants like Alexa not understanding us. But that’s because most of us take for granted that the words we speak are easily understood. For many people with non-standard speech patterns, this isn’t the case. People with cerebral palsy, stroke victims or people with lifelong speech impairments may speak in a manner that digital assistants, or even other humans can’t readily understand.

Tel Aviv, Israel-based startup Voiceitt is working to help fix that with its mobile app, which works as an individualized electronic translator. Once installed on a phone or tablet, the user trains the app by speaking different words. Once Voiceitt has learned those words, the app can then be used to “speak” on behalf of the user. The technology is language agnostic, so it can work in whatever language the app is delivered in.

You can see it in action in this video, where the user has trained the Voiceitt app to understand how he pronounces “burger” and how that means he would like a burger served in a particular way (with cheese, etc.). When it comes time to order, he says “burger” and the phone’s loudspeaker translates the order in a way that is easily understandable by the server.

Voiceitt also ties in directly with Amazon’s voice assistant, Alexa. (Voiceitt is funded in part by the Alexa Fund.) In this way, Voiceitt skips the intermediary step of saying the translation out loud on the phone. Instead when a command is spoken (“Turn on the light”) Voiceitt’s app talks directly to Alexa through the API to execute the command. Turning on a light may not sound like much, but without voice control, people with cerebral palsy, for instance, are reliant on caregivers to come in and do basic tasks like turning on a light or adjusting the volume on the TV or radio.
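Voiceitt’s models are proprietary, but the general shape of a per-user trained vocabulary can be sketched like this, with text similarity standing in for the real acoustic matching:

```python
# Illustrative sketch only: this shows the shape of a per-user trained
# vocabulary, using text similarity (difflib) as a stand-in for the
# acoustic matching a real speech system would do.
from difflib import SequenceMatcher

class PersonalTranslator:
    def __init__(self):
        self.vocabulary = {}  # trained pronunciation -> intended phrase

    def train(self, pronunciation, meaning):
        self.vocabulary[pronunciation] = meaning

    def translate(self, heard, threshold=0.6):
        """Return the closest trained meaning, or None if nothing matches."""
        best, score = None, 0.0
        for pron, meaning in self.vocabulary.items():
            s = SequenceMatcher(None, heard, pron).ratio()
            if s > score:
                best, score = meaning, s
        return best if score >= threshold else None

t = PersonalTranslator()
t.train("buhgah", "a cheeseburger with everything")
t.train("tunonlite", "turn on the light")
print(t.translate("tunonlyte"))  # -> turn on the light
```

The translated phrase can then be spoken aloud by the phone, or, in the Alexa integration, passed straight to the assistant as a command.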

For our purposes here at The Spoon, this also means that a person with non-standard speech could also operate connected appliances like microwaves or water faucets, or order food and groceries for home delivery.

As noted, right now Voiceitt requires each user to train the app individually. It can’t extrapolate to words beyond the trained vocabulary spoken in that particular manner. But as more people use the app, Voiceitt’s database of speech patterns grows. Over time, Voiceitt’s artificial intelligence will process its library of data, recognize more patterns to become more fluent and more of a universal translator.

As that happens, Voiceitt could then be installed on the business side of more locations, for instance at a restaurant’s drive-thru or voice-controlled ordering kiosk. With the software built into the kiosk, users wouldn’t need the middleman of a tablet or phone; the kiosk itself would translate.

Voiceitt has raised $16 million in funding so far, including non-equity money from governments and other funds interested in making technology more equitable. The Voiceitt app will be available here in the U.S. at the end of May, with a subscription costing $200 a year, but Voiceitt says it is working to partner with relevant agencies to make it more accessible to people who aren’t able to afford that cost.

March 30, 2021

Piestro Adds Pay-With-Your-Face Tech and Cubbies for Pickup

Robotic pizza vending startup Piestro announced today that it has partnered with PopID to integrate pay-with-your face technology into Piestro’s machines.

The integration of PopID’s technology will provide Piestro customers a new contactless ordering and payment system. In addition to paying with traditional credit/debit cards, users can create a PopID account to enable payment via facial recognition. Once that setup is complete, users can choose PopID as a payment method on Piestro’s app or on the machine itself. Orders placed ahead of time via the app can be retrieved via the same facial recognition. This pay-with-your-face option will be extended to Piestro’s white label, co-branded machines as well.

That ability to order ahead and pick up your pizza is also a new bit of functionality for the Piestro device. I spoke with Piestro CEO Massimo Noja De Marco by phone last week, who said that nine automat-style cubbies will be built into Piestro machines. This means you’ll be able to order your pizza ahead of time and have it held in a cubby that you unlock with your phone (or face).

That Piestro and PopID are working together isn’t much of a surprise. PopID is part of the Cali Group of companies, which also includes Kitchen United, which De Marco founded and where he served as Chief Concept Officer. On a more existential level, in a post-COVID world, vending machine companies are looking to implement more contactless methods of interaction and reduce the number of physical touchpoints. As a result, other vending machine startups that may have been wary about facial recognition over privacy concerns could be more amenable to the technology now.

Piestro is certainly at the vanguard of a number of different technology trends. In addition to being a fully autonomous, robotic pizza restaurant and adopting a facial payment system, Piestro is also working with Kiwibot to allow delivery robots to pick up and deliver orders from its machine, and is embarking on its second round of equity crowdfunding.

We’ll have to wait for all of this high-tech (and pizza) goodness, however. The first Piestros won’t roll out until early next year.

November 20, 2020

Kea Raises $10M to Bring its AI-Based Phone System to More Restaurants

Kea, which makes an AI-powered virtual phone assistant for restaurants, has raised a $10 million Series A round of funding. TechCrunch was first to report on the round, which was led by Marbruck, with Streamlined Ventures, Xfund, Heartland Ventures, DEEPCORE, Barrel Ventures and AVG Funds, as well as other angel investors participating. This brings the total amount raised by Kea to $17 million.

Basically, Kea is building an automated way for restaurants to answer the phone. The natural language processing software can hold a “conversation” with a customer to take and process their order. You can hear a demo of how it works on the Kea website.

As TechCrunch points out, many restaurants are understaffed and don’t always have a dedicated person to work the phones and take orders. During this time of off-premises eating, not answering the phone can translate into a lot of lost business. Plus, ordering directly from the restaurant instead of through a third-party like DoorDash or Uber Eats helps the restaurant avoid the sky-high commissions those services charge.

Kea is among a wave of natural language customer interaction systems coming to market. Google famously made news a couple years back with its Duplex AI-powered voice assistant for consumers to make automated restaurant reservations that sounded almost too human. Google also developed the CallJoy virtual phone assistant for small business owners. Clinc’s technology brings natural language conversations to the drive-thru, while McDonald’s acquired Apprente last year to add more voice capabilities to its drive-thru.

Kea told TechCrunch that its service is currently live in more than 250 restaurants including Papa John’s. With its new cash, the company is looking to be in 1,000 restaurants in 37 states next year.

November 11, 2020

Amazon Alexa Getting Better at Guessing Follow Up Requests

One big area where virtual assistants like Amazon Alexa and Google Assistant fall short of real human assistants is their inability to contextualize and anticipate what you’ll want next.

Currently, requests made to virtual assistants are often siloed, and go something like this:

“Alexa, how long should I steep tea?”

Alexa answers, with something like “Five minutes,” and then:

“Alexa, set a timer for five minutes.”

In a corporate blog post today (hat tip Geekwire), Amazon announced that Alexa is now getting better at bundling those types of requests together. Amazon refers to this as figuring out your “latent goal,” and actually provides tea steeping as an example. Asking Alexa how long to steep tea could have Alexa guess that your latent goal is to make tea. This, in turn would trigger an immediate and automatic follow up response from Alexa like “Would you like me to set a five minute timer?”
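The pattern is easy to illustrate with a toy rule table. The confidence values and threshold below are invented for illustration; Amazon’s real system uses learned models to decide when a follow-up is warranted:

```python
# Toy sketch of "latent goal" follow-ups: after answering a question, the
# assistant may propose a related action. Rules and confidences are invented.
LATENT_GOALS = {
    "how long should i steep tea": {
        "answer": "Five minutes.",
        "follow_up": "Would you like me to set a five-minute timer?",
        "confidence": 0.9,
    },
    "recipes for chicken": {
        "answer": "Here are some chicken recipes.",
        "follow_up": "Do you want me to play chicken sounds?",
        "confidence": 0.1,  # the failure mode Amazon describes
    },
}

def respond(utterance, threshold=0.5):
    entry = LATENT_GOALS.get(utterance.lower().strip("?"))
    if entry is None:
        return None
    reply = entry["answer"]
    # Only volunteer a follow-up when the model is confident the goal is real.
    if entry["confidence"] >= threshold:
        reply += " " + entry["follow_up"]
    return reply

print(respond("How long should I steep tea?"))
print(respond("Recipes for chicken"))
```

The confidence gate is the interesting part: it is what keeps the assistant from offering chicken sounds to someone who just wanted dinner ideas.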

While this seems straightforward, as with so many AI-related tasks, understanding what people want isn’t exactly easy. From Amazon’s blog post:

The first step is to decide whether to anticipate a latent goal at all. Our early experiments showed that not all dialogue contexts are well suited to latent-goal discovery. When a customer asked for “recipes for chicken”, for instance, one of our initial prototypes would incorrectly follow up by asking, “Do you want me to play chicken sounds?”

Beyond tea, it’s not hard to think of how identifying latent goals could be useful in a smart kitchen. In the case of asking for chicken recipes, Alexa could follow up with offers to pre-heat an oven, or, more relevant to Amazon, offer to order you the necessary groceries for delivery that day (preferably from an Amazon grocery store).

Amazon says this latent goal capability is available now in English in the U.S. And while it doesn’t require any additional work from developers to activate, they can make their skills more discoverable with the Name-Free Interaction Tool Kit.

FWIW, I tried asking Alexa the tea steeping question, and it did not follow up with a timer suggestion. So its latent goals capabilities seem to still be, well, latent.

September 29, 2020

Amazon Launches Palm-based Contactless Payment Method

You have to, errrr, hand it to Amazon. The e-commerce giant today announced Amazon One, a new contactless payment method that relies on scanning your palm as you enter its store.

Amazon One is now an entry option at two of Amazon’s Go stores in Seattle (the 7th & Blanchard and South Lake Union stores, if you’re in the Emerald City). To use the new system, you insert your credit card into the terminal and hover your palm over the device. The terminal scans your palm print and from that point on, you just need to hold your hand over the One terminal upon entering the store. After that, the Go technology kicks in and automatically keeps track of and charges you for what you take from the store.

You do not need an Amazon account to use Amazon One, just a mobile phone and a credit card. But you can tie your Amazon account to One, should you choose.

In addition to its own physical stores, which include Go convenience stores, Go Grocery stores, Fresh grocery stores, Prime stores and more, Amazon envisions One being used by other retailers. From the blog post announcing the technology:

In most retail environments, Amazon One could become an alternate payment or loyalty card option with a device at the checkout counter next to a traditional point of sale system. Or, for entering a location like a stadium or badging into work, Amazon One could be part of an existing entry point to make accessing the location quicker and easier.

We’ve heard rumblings about some form of pay-with-your-palm coming from Amazon for a while now, so today’s announcement isn’t a surprise. It’s also not a surprise given Amazon’s devotion to speed and efficiency. Scanning your phone to enter a Go store may be easy, but waving your hand over a device is much easier and faster. This, in turn, could entice you to choose an Amazon store over the competition more often.

Amazon One is also coming out during a global pandemic and at a time when retailers are looking for more contactless payment methods. Amazon also licenses out its cashierless Go technology, and combining the two could be an attractive contactless option for retailers.

Of course, given Amazon’s increasing dominance in not only retail but many other facets of our everyday lives, people may be reluctant to hand over their biometric data like a palm print. In its One FAQ, Amazon said it chose palm prints because they are more private, and that you can delete your data from the service after signing up.

I don’t need to be a palm reader to see that One will probably play an increasingly important part of Amazon’s physical retail experience, and that we could see it in a lot of other stores in the coming years.

September 16, 2020

Apple App Clips is Out Today, Right in the Middle of the Contactless Heyday

Apple’s iOS 14 drops today, which means a bunch of new features are coming to your iPhone. Among the batch of goodies to be found in the update is the release of App Clips, which could help accelerate the adoption of mobile contactless payments across the food retail and restaurant space.

We covered App Clips back when the service was announced at Apple’s WWDC event. In a nutshell, App Clips allows you to pull down just a portion of a native mobile app to give you its basic functionality, without needing to download and set up the full app.

Let’s say, for example, you are at a Starbucks but don’t have the Starbucks app installed on your iPhone. Normally, you’d have to download the full app, set up an account and then enter a payment method before you could even start your order. Through App Clips, you can grab just enough of the Starbucks app to order and pay. No need to download the full app, no need for an account and since it uses Apple Pay, no need to enter credit card information.

The big limitation right now is that a cafe, restaurant or retailer has to be participating in the program and have developed an App Clip. But if/when they do, App Clips can be opened by scanning QR codes, tapping NFC tags, or opening links via Apple services like Messages or Safari. For a full rundown of how App Clips works, check out this post from 9to5Mac.

App Clips is definitely arriving at the right time. The COVID-19 pandemic’s sustained presence here in the U.S. is pushing cafes, restaurants and grocery stores towards more contactless payment systems in an attempt to reduce human-to-human contact.

There are a ton of companies bringing contactless payment tech to market. Order for Me, PayJunction and Bbot are among the many startups building contactless payment systems for restaurants. Kroger launched a contactless payment pilot at its QFC store in Seattle last month. And there are a number of other companies doing contactless 2.0, basically through holograms, voice control and pay with your face.

The difference with App Clips, though, is Apple’s massive iPhone installed base. If this catches on, and given its utility (no downloading full apps just to get a cup of coffee!) that seems very possible, it could spur vast numbers of people to switch to contactless mobile payments. We just need to see if Apple has the right, well, touch.

September 11, 2020

“Alexa, Look Into My Eyes”: New Prototype Combines Human Gaze with Voice Control to Help You Cook

There’s no doubt that voice control interfaces like Alexa and Google have had a huge impact on the way we search for recipes, access our appliances and add things to our grocery lists.

But what if that voice assistant had a contextual understanding of where you were looking while cooking the evening meal?

That’s the big idea behind a new prototype from Synapse, a division of Cambridge Consultants. The new Hobgoblin technology concept utilizes machine vision to gather information about where a person is looking when issuing a voice command and applies that information to the cooking experience.

From the project page:

We have been exploring the use of computer-vision based sensing as context, and for this cooktop demonstration we augmented the VUI using gaze tracking to make what feels like a magical interaction. The cooktop infers which burner is being addressed in a voice command by using its camera tracking to know which burner you’re looking at. This way, when the system detects a person standing in front of it looking at a burner, commands can omit the burner designation, e.g. “turn that burner on,” or simply saying a level like “medium high.”
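The interaction pattern described there, resolving an underspecified command with gaze context, can be sketched like this (all names are illustrative; the Hobgoblin prototype’s API isn’t public):

```python
# Sketch: an underspecified voice command ("turn that burner on",
# "medium high") is resolved using which burner the gaze tracker
# reports the user is looking at.
def resolve_command(command, gaze_target, burners):
    """Fill in the missing burner designation from gaze context."""
    if gaze_target not in burners:
        return None  # no gaze context: the command stays ambiguous
    levels = {"low", "medium", "medium high", "high", "off"}
    cmd = command.lower()
    if cmd in levels:
        return (gaze_target, cmd)   # bare level, e.g. "medium high"
    if "burner" in cmd and "on" in cmd:
        return (gaze_target, "on")  # "turn that burner on"
    return None

burners = {"front-left", "front-right", "back-left", "back-right"}
print(resolve_command("medium high", "front-left", burners))
```

The point is that gaze supplies the noun the sentence left out, so the spoken command can stay as terse as what you would say to another person standing at the stove.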

In the concept video, a user is cooking and says “Alexa, turn up the heat.” Using a camera that is built into the cooktop, Alexa is able to infer that the user is cooking because they are looking at the cooktop.

There’s no doubt that the ability to combine contextual understanding of what is happening in a room to power commands given to digital assistants like Alexa could create much more powerful potential “smart kitchen” user scenarios. One could easily imagine combining other information to create more contextually relevant experiences, including facial recognition to, for example, apply a personalized knowledge base that understands a person’s cooking capabilities or their favorite recipes.

You can see the Hobgoblin demo in action below:

Hobgoblin Smart Appliance Interface | This New User-Interface Tech Isn't Just for the Kitchen

August 10, 2020

As Menus Move to Mobile Phones, Research Shows AR Could Drive More Sales

Among the countless ways COVID is altering the meal journey is the humble menu. Gone are the germy, reusable laminated menus of the past, and while single-use paper menus are a cheap stopgap, the whole experience is moving to our mobile phones.

There’s a problem with ordering through mobile menus, though: they aren’t very enticing. Unless you’re familiar with the restaurant you’re ordering from, you’re scrolling and swiping through a lengthy list of tiny 2D thumbnail images to find what you want.

But new research out of Oxford University shows that augmented reality (AR) could be the way to create appetizing menus on your mobile phone. Oxford’s study, conducted in Oxford, England last year in partnership with the AR company Qreal, a subsidiary of The Glimpse Group, gave some participants traditional menus and others AR-capable menus that presented the virtual food as it would look right in front of them on the table.

Highlights from the study, which were emailed to The Spoon, found that “Participants were significantly more likely to order a dessert if they viewed options in the AR menu (41.2%) versus the control menu (18%).” This was across age groups and sexes, as well as across familiarity with AR, so it wasn’t just tech-savvy folk indulging in a shiny new toy.

Not only that, but participants in the study using the AR menu also “spent significantly more on dessert than those in the control condition, $2.93 versus $1.38 (increase of 112%)”
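Those two findings are consistent with each other; the 112% figure follows directly from the two dollar amounts:

```python
# Checking the study's reported lift from the two averages it cites.
ar_spend, control_spend = 2.93, 1.38
increase = (ar_spend - control_spend) / control_spend
print(f"spend increase: {increase:.0%}")

ar_rate, control_rate = 0.412, 0.18
print(f"dessert orders: {ar_rate / control_rate:.1f}x more likely")
```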

As Mike Cadoux, General Manager of Qreal, summed it up with me over the phone last week, the addition of AR plays into the old adage that you “eat with your eyes first.”

“It’s like a test drive for a car,” said Cadoux, “Same way when you buy food, you want to think about what it’s like to eat it.”

If the results of this study had been released even six months ago, it probably would have been viewed as more of an interesting idea. A nice-to-have kind of thing, but definitely a can kicked down the road in favor of something more pressing.

The coronavirus, however, could accelerate AR’s adoption in the restaurant industry. First, as noted, even if you can (legally and emotionally) sit and dine in a restaurant, the menu is moving towards mobile, so restaurants need to rethink their digital strategy and how they present their food to customers to begin with.

But more pressing is the fact that the restaurant business was already moving towards off-premises eating before the pandemic and now relies on delivery and takeout to stay alive. This, in turn, means that more people will be selecting their meals from the comfort of their couch via mobile phone.

“Instead of a thumbnail of a picture,” Cadoux said, “You can view it in 3D and move it to an AR experience.” AR gives customers a better sense of what the food will look like, from all angles, when it’s on their own plates and tables.

In addition to restaurants, third-party delivery services, with their marketplaces and massive audiences, should also be looking closely at providing an AR option.

There are the economics of a shift to an AR menu for any restaurant or delivery service to consider. But thanks to Apple and Google, AR technology is easier than ever to implement. And while the Oxford study doesn’t prove outright that implementing AR menus will guarantee increased sales, it is a nice data point suggesting it’s worth at least experimenting with.

June 15, 2020

Bevi Will Socially Distance Its Smart Water Coolers With Touchless Tech

As restaurants reopen and (some) employees go back to the office, ensuring sanitary, socially distanced public spaces is a major topic of discussion, and contactless is fast becoming a requirement for everything from restaurant menus to grocery deliveries to lunch.

Your office water cooler can join that list now, too. Today, Bevi, a tech company that makes smart water coolers for office and commercial spaces, announced a new touchless dispensing feature meant to make its machines feel more sanitary and socially distanced to users.

The Bevi machine dispenses both still and sparkling water in a variety of flavors. The system involves an internet-connected dispenser that hooks up to a tap water source. Up to now, users could set flavors, carbonation levels, and other preferences using a touchscreen built into the machine. But come July 13, both new and existing Bevi machines will offer touchless dispensing that utilizes an individual’s mobile phone, according to a press release sent to The Spoon.

Come July 13, Bevi will send an on-screen animation to all its machines that includes instructions on how to use touchless dispensing. To enable the animation, companies just have to run a simple software update. From there, users can scan a QR code, which will replicate Bevi’s dispensing menu on their own personal screen. The same options for drink customization (carbonation levels, flavor, etc.) will appear on the user’s phone just as they would have on the machine’s touchscreen.
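Bevi hasn’t published its implementation, but the general QR-to-phone pattern, a short-lived session token that ties the phone’s menu choice back to a specific machine, looks something like this sketch (all names and URLs are illustrative):

```python
# Hypothetical sketch of a QR-based touchless dispenser flow: the machine
# shows a QR code containing a one-time session token; the phone's menu
# selection is routed back to that machine via the token.
import secrets

SESSIONS = {}  # token -> machine_id

def new_qr_payload(machine_id, base_url="https://example.com/dispense"):
    """Machine generates a short-lived token and displays it as a QR code."""
    token = secrets.token_urlsafe(8)
    SESSIONS[token] = machine_id
    return f"{base_url}?session={token}"

def handle_phone_request(token, selection):
    """Server routes the phone's menu choice back to the right machine."""
    machine = SESSIONS.pop(token, None)  # one-time use
    if machine is None:
        return "session expired"
    return f"machine {machine}: dispensing {selection}"

url = new_qr_payload("bevi-42")
token = url.split("session=")[1]
print(handle_phone_request(token, "sparkling lime"))
```

Making the token one-time use is what keeps a photographed QR code from controlling the machine later, a simple design choice that matters for anything in a shared space.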

On the surface, the update seems a small one, but these micro innovations from the tech world play an important role in making the world, or at least your office or local restaurant, a more sanitary place. While the degree of germophobia varies from one individual to the next, the pandemic has definitely called into question our use of these screens in public settings.

Various efforts are in place to address those concerns. Restaurants across the world are being urged to adopt contactless menus. My colleague Chris Albrecht makes a good argument for gesture control on kiosks and smart dispensers. Others are releasing facial recognition technology on their machines, so that a user need only have their face scanned to access the customer profiles and past orders. 

But facial recognition systems are expensive and come with a double side of privacy concerns. In lots of cases, it may be that a simple QR code is more feasible for a business to implement, especially if it’s for something simple like dispensing a lime-flavored water.

That seems to be Bevi’s thinking behind its new feature update. Doubtless we’ll see many other device-makers rolling out their own touchless functionality in the near future.

June 1, 2020

Now is the Time to Add Gesture Control to Self-Serve Kiosks

Self-serve kiosks could be one bit of technology used to help restaurants reopen. Switching to kiosks can help eliminate ordering from a worker behind a counter, thereby reducing human-to-human contact.

But the problem with a lot of kiosks is that they still require people to use a touchscreen when it comes to browsing a menu and ordering food. Those touchscreens are, well, touched by a lot of hands throughout the day. This will mean that restaurants will need to not only clean the screens on a regular basis, but make sure those sanitation efforts are on display so customers know the kiosks are clean.

One feature that could help mitigate any concerns over using public kiosks is gesture tech — that is, the ability to wave and swipe your hands in front of a device without actually touching it to activate it.

More than just waving your hand under a faucet to get it to run, gesture controls have been around for a long time. Check out this demo of gesture control via Kinect on the Microsoft Xbox from 2014. It shows how you can scroll and select on-screen objects with your hand while never actually touching the screen. My recollection of using the system back then was that it was pretty clunky, but advancements in computing power, computer vision and even AI have likely resolved many of those issues.
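At its simplest, detecting a swipe from tracked hand positions takes very little code. Here’s an illustrative sketch using normalized x-coordinates as a camera or depth sensor might report them:

```python
# Illustrative sketch: classifying a horizontal swipe from a sequence of
# tracked hand x-positions (normalized to 0..1 across the sensor's view).
def classify_swipe(x_positions, min_travel=0.2):
    """Return 'left', 'right', or None given normalized x samples."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > min_travel:
        return "right"
    if travel < -min_travel:
        return "left"
    return None  # not enough movement to count as a swipe

print(classify_swipe([0.2, 0.35, 0.5, 0.7]))  # -> right
print(classify_swipe([0.8, 0.6, 0.45, 0.3]))  # -> left
print(classify_swipe([0.5, 0.52, 0.49]))      # -> None
```

The `min_travel` threshold is the practical knob: too low and idle hand jitter triggers menu scrolling, too high and deliberate swipes get ignored, which is roughly the clunkiness the old Kinect systems struggled with.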

You can start to see some of this contactless tech with PopID, which makes kiosks that let you pay with your face. Obviously, as with any facial recognition, there are privacy concerns. But, rightly or wrongly, those more abstract concerns could get set aside by people who don’t want to physically interact with a public machine or insert their credit card.

Obviously, an easier solution would be to skip the kiosk altogether and just have people use their phone to order. In that way, the only object they touch is their own. That is true, but not everybody has a smartphone. Also, while people may download apps for big companies they frequent like Starbucks or McDonald’s, they won’t download an app for every single restaurant they visit. Plus, mobile apps are expensive to build and not every mom and pop restaurant can afford to make them.

We are just starting to figure out what people want from restaurant interactions. Perhaps they will be fine with ordering from an actual person wearing a mask and all of this is for naught. But if the virus doesn’t go away or rebounds in a meaningful way, we could all be waving hello to gesture controlled kiosks.
