As virtual assistants such as Alexa, Google Assistant and Siri become more popular, so too do they become more attractive targets for hackers. The New York Times reports that researchers in China and the U.S. have discovered a way to surreptitiously activate and command those virtual assistants by broadcasting instructions that are inaudible to the human ear.
From that NYT story:
A group of students from University of California, Berkeley and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.
This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.
Hackers might not care about your shopping list, but given that 41.4 percent of smart speakers are in the kitchen, it's worth asking whether hidden commands could be used to turn on an oven while you're out, or to secretly start a video call.
You should read the full Times article for a nice dive into the world of AI, speech recognition and modern hacking techniques. For our purposes here at The Spoon, these security warnings are worth keeping in mind as companies look to use food as a way to get further into our homes and control more parts of them.
Consider that Amazon, which is behind Alexa, wants to use a series of connected locks and cameras to allow deliveries into your home or car while you're not there. Or that Google just this week announced more native control over kitchen appliances and a strikingly human-sounding AI that can make calls on your behalf to set up appointments. And shortly after Amazon Key launched last year, a WiFi exploit opened a security hole in its connected camera setup.
Now before you toss your Echo into the ocean, you should know that there’s no evidence that this type of attack has ever happened outside the lab. But that won’t stop hackers from trying and improving their techniques.
The point is that it will be incumbent upon companies to be as proactive and transparent as possible in detecting and thwarting these types of attacks. But if we as consumers want to hand over more control of our lives to these virtual assistants, then it's up to us as well to stay vigilant and educated about the new threats their use brings.