Monday, September 11, 2017

How hackers could send secret commands to speech recognition systems with ultrasound

Chinese security researchers have discovered a way to send secret, inaudible commands to speech recognition systems such as Siri, Amazon Alexa, and Google Home using ultrasound.
A team from China’s Zhejiang University devised the technique, which they have dubbed “DolphinAttack”.
Now, it’s important to stress that this is a *potential* problem. Although the technique is quite ingenious, once you hear the details you will probably decide that this particular threat is not one to lose much sleep over.
But that doesn’t make it any less fascinating to, ahem… hear about.
DolphinAttack relies upon audio commands being sent at ultrasonic frequencies above 20,000 Hz – beyond the upper limit of human hearing. Although the commands are at a frequency too high for you or me to hear, they can be picked up by a smart device’s microphone.
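To make that concrete, here is a minimal sketch of the general idea – my own illustration, not the researchers’ code. It amplitude-modulates a baseband “voice” signal onto an ultrasonic carrier; the 25 kHz carrier, 192 kHz sample rate, and 1 kHz test tone are all assumptions chosen for illustration.

```python
import numpy as np

# Assumed parameters for illustration -- not taken from the paper.
SAMPLE_RATE = 192_000  # Hz; well above twice the carrier frequency
CARRIER_HZ = 25_000    # Hz; above the ~20 kHz ceiling of human hearing

def modulate_ultrasonic(voice, sample_rate=SAMPLE_RATE, carrier_hz=CARRIER_HZ):
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    All of the transmitted energy sits around carrier_hz, so a human
    bystander hears nothing at all.
    """
    t = np.arange(len(voice)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: the carrier plus voice-shaped sidebands around carrier_hz.
    return (1.0 + voice) * carrier

# A 1 kHz tone stands in for a recorded voice command.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE          # one second of samples
voice = 0.5 * np.sin(2 * np.pi * 1_000 * t)
ultrasonic = modulate_ultrasonic(voice)           # inaudible to humans
```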
And this is where things get really geeky.
When the tiny, thin membrane in an electronic device’s microphone “hears” an input frequency by picking up the sound wave, it not only vibrates at that frequency but also creates weaker harmonic signals. As TechCrunch describes it:
“…say you wanted a microphone to register a tone at 100 Hz but for some reason didn’t want to emit that tone. If you generated a tone at 800 Hz that was powerful enough, it would create that 100 Hz tone with its harmonics, only on the microphone. Everyone else would just hear the original 800 Hz tone and would have no idea that the device had registered anything else.”
In that way, the researchers were able to send commands inaudible to the human ear but which could be “heard” by the microphone.
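Here is a rough numerical sketch of why that works – again my own illustration under assumed parameters (a 25 kHz carrier, a 1 kHz test tone, and a made-up quadratic coefficient of 0.1). If the microphone’s response contains even a small nonlinear term, that term demodulates the ultrasonic signal and puts a copy of the voice command back into the audible band, where the device’s own audio pipeline picks it up.

```python
import numpy as np

# Assumed parameters for illustration.
SAMPLE_RATE = 192_000  # Hz
CARRIER_HZ = 25_000    # Hz, inaudible carrier
VOICE_HZ = 1_000       # Hz, stand-in for a voice command

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = 0.5 * np.sin(2 * np.pi * VOICE_HZ * t)
am = (1.0 + voice) * np.cos(2 * np.pi * CARRIER_HZ * t)  # airborne signal

# Model a microphone with a small quadratic nonlinearity:
#   recorded = input + a * input^2
# Squaring the AM signal produces, among other terms, the original
# baseband voice -- that is the self-demodulation effect described above.
recorded = am + 0.1 * am**2

# The spectrum of the recording now shows a peak at VOICE_HZ, even though
# the signal in the air contained no audible component at all.
spectrum = np.abs(np.fft.rfft(recorded))
freqs = np.fft.rfftfreq(len(recorded), d=1 / SAMPLE_RATE)
band = (freqs > 500) & (freqs < 5_000)          # audible band of interest
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Strongest audible-band component: {peak_hz:.0f} Hz")  # ~1000 Hz
```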
A brief demo of the attack in action against a Siri-enabled iPhone can be seen in the following YouTube video:
In tests, the researchers were able to demonstrate that the DolphinAttack technique could be used with a variety of devices with built-in speech recognition manufactured by the likes of Amazon, Apple, Google, Huawei, and Microsoft.
So, what could be done with this attack? Well, in their technical paper the researchers argue that the inaudible voice commands could be used in a number of “sneaky attacks”:
  • Telling a device to visit a malicious site that could initiate a drive-by download or exploit the device via a zero-day vulnerability.
  • Making the targeted device initiate outgoing video/phone calls, opening opportunities for surveillance.
  • Sending fake text messages or emails, publishing unauthorised online posts, adding fake events to the calendar, etc.
  • Turning on the device’s airplane mode, thereby disconnecting wireless connections.
  • Dimming the screen’s display and lowering the volume, thus making it more difficult to determine that other attacks are taking place.
Worried? Perhaps you shouldn’t be. It’s hard to imagine that most of us are ever likely to be at risk of being targeted in this way, especially when you consider that the researchers found they had to be within about six feet of the device they were attempting to manipulate in order to launch an ultrasound attack.
Furthermore, simple measures such as disabling or changing a device’s wake-up phrase or restricting what actions it can undertake when locked would severely limit the opportunities for a DolphinAttack to cause mischief. Guides on how to do this will vary from device to device, but here are some instructions for restricting Siri on iOS devices, for instance.
The Zhejiang University researchers will present their full research at a security conference in Dallas next month, but in the meantime, you may well enjoy reading their technical paper.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.
