Laser-Based Hacking from Afar Goes Beyond Amazon Alexa

Elizabeth Montalbano | threatpost.com | Nov. 25, 2020, 2:40 p.m.

Imagine someone hacking into an Amazon Alexa device using a laser beam, then doing some online shopping with that person's account. This is a scenario presented by a group of researchers who are exploring why digital home assistants and other sensing systems that use sound commands to perform functions can be hacked with light.

The same team that last year mounted a signal-injection attack against a range of smart speakers using nothing more than a laser pointer is still unraveling the mystery of why the micro-electro-mechanical systems (MEMS) microphones in these products turn light signals into sound.

Researchers at the time said that they were able to launch inaudible commands by shining lasers – from as far as 360 feet – at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

“[B]y modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio,” said researchers at the time.
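In other words, the injection amounts to ordinary amplitude modulation: the waveform that would normally arrive at the microphone as sound pressure is instead encoded in the brightness of a laser beam. Below is a minimal Python sketch of that idea (not the researchers' actual tooling); it assumes a hypothetical laser driver that accepts normalized intensity samples, and simply rides the audio signal on a DC bias so the beam never switches fully off.

```python
import numpy as np

SAMPLE_RATE = 44_100  # audio sample rate, in Hz

def audio_to_intensity(audio: np.ndarray, bias: float = 0.5,
                       depth: float = 0.4) -> np.ndarray:
    """Amplitude-modulate a laser-intensity signal with an audio waveform.

    audio: mono samples in [-1, 1] (the voice command to encode)
    bias:  DC operating point that keeps the laser above its threshold
    depth: modulation depth; bias + depth should not exceed 1.0
    """
    peak = np.max(np.abs(audio))
    audio = audio / peak if peak > 0 else audio  # normalize to [-1, 1]
    intensity = bias + depth * audio             # ride the audio on the DC bias
    return np.clip(intensity, 0.0, 1.0)          # stay in the valid drive range

# Example: a 440-Hz test tone standing in for a recorded voice command.
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
drive = audio_to_intensity(np.sin(2 * np.pi * 440 * t))
# 'drive' would be streamed to a laser current driver (hypothetical hardware);
# a MEMS microphone struck by the beam reproduces the tone as an electrical signal.
```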

Now, the team (Sara Rampazzi, an assistant professor at the University of Florida; and Benjamin Cyr and Daniel Genkin, a Ph.D. student and an assistant professor, respectively, at the University of Michigan) has expanded these light-based attacks beyond digital assistants into other aspects of the connected home.

[Video: "Alexa, Siri, Google Smart Speakers Hacked Via Laser Beam" – a demonstration of the Light Commands vulnerability and its exploitation of MEMS microphones.]

They broadened their research to show how light can be used to manipulate not only a wider range of digital assistants, including the third-generation Amazon Echo, but also sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems.

The researchers also delved into how the ecosystem of devices connected to voice-activated assistants, such as smart locks, home switches and even cars, suffers from common security vulnerabilities that can make these attacks even more dangerous. The paper shows how a digital assistant can serve as a gateway: once an attacker takes control of one, he or she has the run of any device connected to it that also responds to voice commands. These attacks get even more interesting when those devices are tied to other aspects of the smart home, such as smart door locks, garage doors, computers and even people's cars, they said.

“User authentication on these devices is often lacking, allowing the attacker to use light-injected voice commands to unlock the target’s smartlock-protected front doors, open garage doors, shop on e-commerce websites at the target’s expense, or even unlock and start various vehicles connected to the target’s Google account (e.g., Tesla and Ford),” researchers wrote in their paper.

The team plans to present the evolution of their research at Black Hat Europe on Dec. 10, though they acknowledge they still aren’t entirely sure why the light-based attack works, Cyr said in a report published on Dark Reading.

“There’s still some mystery around the physical causality on how it’s working,” he told the publication. “We’re investigating that more in-depth.”

The attack that researchers outlined last year, dubbed "Light Commands," leveraged the design of smart assistants' microphones in the latest generation of Amazon Echo, Apple Siri, Facebook Portal and Google Home devices.

Researchers focused on the MEMS microphones, which work by converting sound (voice commands) into electrical signals. Even so, the team was able to launch inaudible commands by shining lasers at the microphones from as far as 110 meters (360 feet).

The team does offer some mitigations for these attacks from both software and hardware perspectives. On the software side, users can add an extra layer of authentication on devices to “somewhat” prevent attacks, although usability can suffer, researchers said.
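As a concrete illustration of what that extra layer could look like, here is a hypothetical Python sketch (not any vendor's actual API) that gates sensitive voice commands behind a spoken one-time code. Because the light-based attack only injects audio and cannot hear the assistant's reply, a random challenge defeats it, at the cost of some convenience.

```python
import secrets

# Commands that should never execute on a bare voice command (illustrative list).
SENSITIVE_COMMANDS = {"unlock front door", "open garage", "start car", "place order"}

def handle_command(command: str, ask_user) -> str:
    """Gate sensitive commands behind a random spoken confirmation code.

    ask_user: callback that speaks a prompt and returns the user's reply;
    in a real assistant this would run through the TTS/ASR pipeline.
    """
    if command not in SENSITIVE_COMMANDS:
        return f"executing: {command}"

    code = f"{secrets.randbelow(10_000):04d}"  # fresh 4-digit challenge
    reply = ask_user(f"To confirm, please say the code {code}")
    if reply.strip() == code:
        return f"executing: {command}"
    return "confirmation failed; command ignored"

# An attacker injecting commands by laser never hears the challenge, so each
# attempt succeeds with at best a 1-in-10,000 guess.
print(handle_command("open garage", ask_user=lambda prompt: input(prompt + " ")))
```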

In terms of hardware, reducing the amount of light that reaches the microphones could help mitigate attacks: a barrier or diffracting film can physically block straight light beams while letting sound waves detour around the obstacle, they said.
