Alexa vs. Alexa: New Vulnerability With the Amazon Echo?

Alexa might not need your voice command to play Despacito anymore! A new vulnerability, that can be exploited in the Amazon Echo has been found by researchers at the University of London and University of Milan (this will also be referred to as Esposito et al. in the future).

Retrieved from: https://www.theverge.com/2020/9/24/21452347/amazon-echo-4th-generation-features-price-release-date-alexa

The Amazon Echo, similar to the Google Home, is a smart speaker. It operates through voice commands, and is able to do a plethora of tasks, including controlling household “smart” appliances, setting alarms, sending emails, shopping, and playing music. Because of the large presence, and access the Amazon Echo has on personal information, any vulnerabilities could have disastrous consequences.

What is Alexa vs. Alexa (AvA)?

Alexa vs. Alexa, or AvA, is a new term coined by Esposito et al., and entails multiple different ways that the Amazon Echo in particular can be subject to malicious attackers, but this can also likely apply to other smart speakers as well. It works by making Alexa, the virtual assistant in the Amazon Echo, say commands to itself, making it possible to alter emails, smart appliances, and buy products off of Amazon, all unauthorized. This can work through either a Bluetooth device, or through radio.

What about requiring verbal confirmation/volume decreases?

The researchers were able to negate the requirement of confirmation for some commands by having Alexa say yes after a pause. In order to combat the volume decrease that arises when the Echo perceives someone speaking, they are able to take advantage of a vulnerability known as the Full Volume Vulnerability, which actually stops the Echo from turning the volume down.

Retrieved from: https://www.wired.com/story/hackers-turn-amazon-echo-into-spy-bug/

Invasion of Privacy

If that wasn’t enough, malicious attackers could also make an application that runs in the background, while it can overhear your commands. It then responds to your commands in the voice of Alexa in such a way that while eavesdropping, can make it seem as though you are just interacting with Alexa. This can allow for multiple issues: attackers can listen in on all information, potentially sensitive, provided, and they can provide you with incorrect information, such that it removes suspicion.

What does this look like?

Here’s a video of the authors demonstrating how the various commands work:

Are there any weaknesses to this attack?

A clear weakness that presents itself is that due to the nature of Bluetooth, if using this method, attackers need to be near the Echo to go through with the attack. Additionally, in response to the paper, Amazon had changed functionality to make the Echo resistant to commands presented through a radio.

How can I protect myself?

A recommendation presented by the authors is that in order to reduce the likelihood of these attacks occurring, it is very important to mute your microphones when not using the Echo, or set it so that the microphone only turns on when you are near it, so that you can hear commands if they arise. Additionally, through the Alexa app, you can delete voice recordings, reducing the likelihood of commands coming from the Echo itself, and it is possible to cancel a skill by giving a verbal command.

References

https://arstechnica.com/information-technology/2022/03/attackers-can-force-amazon-echos-to-hack-themselves-with-self-issued-commands/

https://arxiv.org/pdf/2202.08619.pdf

https://www.bitdefender.com/blog/hotforsecurity/alexa-hack-yourself-researchers-describe-new-exploit-that-turns-smart-speakers-against-themselves/

https://www.tomsguide.com/news/amazon-echo-security-loophole-exploited-to-make-them-hack-themselves

Join the Conversation

24 Comments

  1. I own a google home mini and I honestly always keep the microphone on, but after reading this post I just turned it off lol. These AI’s are not perfect and are still evolving. It seems unreal that there is also a way to invade someone’s privacy through a Bluetooth connection, considering bluethooth was invented back in 1994. It is true that everyday hackers evolve and find new ways to attack but it is astonishing that the authors of your source were able to find a loop hole in bluethooth. From now onwards I will surely keep an eye on my google mini. Good post!

  2. Given the complicated AI deployed in Alexa, security breaches can happen through a small, static bug that could be introduced which build up into a disaster due to the evolving nature of the code. This does not even require much involvement of the hackers, all they need is a one time access to the network to install the malware. This is a serious failure point for such systems and should be taken care of. This is a comprehensive blog, well done! I liked the final sections where you highlighted some of the weaknesses as well as measures for protection. Very useful pointers.

  3. Interesting post!
    My parents used to have a Amazon Echo, and they rarely used it or interacted with it but it was never turned off or had its settings changed from the initialization. Personally I would never get an Amazon Echo, Google Home Mini or any other smart device that has the microphone always on, it makes me feel creeped out. I do think Smart devices are useful though in other places, like my Wifi Mesh Router is part of Amazon and does some ad blocking automatically and has a built in firewall that blocks suspicion websites from being accessed.

  4. Oh wow, as if these weren’t a security concern already. It always worried me to have something around that was not only always listening but also had the capacity to connect to the internet, and knowing that they can now essentially parrot commands back to themselves to bypass security, it’s even more worrying. I enjoy smart technology, and how convenient it makes things, but I agree that due to concerns like this it should at least have the option to limit capabilities of the device, like turning off or restricting the microphone.

  5. Very relevant and informative post here.
    I currently have an Amazon echo and I do notice how sometimes it picks up on what I am saying, even though I don’t initiate it (not saying ” Hey Alexa”). Sometiems I get confused on why that happens, and unfortunately many times I choose to ignore it. However, I will implement those recommendations, so thank you for that. It looks like these echo speakers and other similar electronics are more dangerous than most people think it is.

  6. Awesome post! This post sheds light on the fact that “smart” and “security” are not always synonymous with one another, when in reality, they should be, especially in the cybersecurity domain. In recent years, there have been concerns regarding the proliferation of insecure smart-home devices online and the lack of government oversight to protect consumers from such devices. Such vulnerabilities in smart-home devices afford hackers the ability to both manipulate and control audio and video from the devices. Furthermore, it permits hackers the ability to download and delete files. For example, researchers identified a security flaw in another smart-home device. It was identified that there were flaws in smart doorbells and cameras from device maker Geeni, which allowed hackers to get in without leaving any signs that the device had been accessed. Although not relevant to Amazon’s Echo, the vulnerability still results in a compromise of consumer privacy, security, and safety. If we are paying for “smart devices”, it is critical that they offer safety and enhanced privacy, as opposed to withholding weakness that can be exploited to undermine the security we should expect from such devices. In this case, it would be better to stay away from smart-home devices altogether. Even more concerning, is that smart-device vulnerabilities, require a fairly low-level of skill to be exploited for malicious purposes, and thus, such attacks can become prevalent, if quality control is not adopted to ensure such devices are restricted in some form.

  7. This post hits the nail on the head as to why I do not trust products like these. I will never get something like this in my entire life, because the vulnerabilities seem endless and too high a risk for a small convenience. The fact that hackers can listen in on your private conversations alone, makes the risk way too high for me. If I want to check the weather I will just look at phone instead of buying one of these. I knew from the moment that these types of products were released, that they would have many high risk vulnerabilities and did not trust for a minute that they would not listen to private conversations. I highly recommend anyone who owns one of these products to throw it in the incinerator to protect your own privacy.

    1. I full-heartedly agree with this comment, its really not worth trading your privacy for convenience!

      The fact that someone can hack a smart speaker and make it appear as though Alexa is responding to your commands is very concerning. Muting the microphone of the speaker is a good recommendation, but I honestly think the best practice would be to get rid of the smart speaker.

  8. Hi,
    Ever since these home AI’s became very famous, I was never fan of these ideas, it is probably because I’ve watched too many movies about these AI’s getting hacked and runing our lives. After reading this article, I am becoming more aware and being more afraid to use these AI’s because the idea of these AI’s hearing everything I and my family do really creeps me out and I am sure everyone would feel the same thing. These AI’s really do sound convenient but I don’t think I will install them to my houses any time soon.

  9. This was an interesting piece to read, while technology improvements have made life simpler for us throughout time, they also have drawbacks. The trade-off between efficiency and privacy is shown by voice assistants. The technology still has its flaws. There is no security against further intrusive use of voice assistants without proper regulation. The gadgets are always on, even when they are not awake and are constantly listening.

  10. AI’s turned out to be exceptionally popular, I was never a fan of these thoughts, it is presumably in light of the fact that I’ve watched a lot of documentaries and movies and seen how people misuse technology but then again I was always fascinated by the power of technology and I believe it can one day control the human population. due to current problems of AI I am turning out to be more more hesitant to utilize these AI’s but then again I am excited to see how we can use AI and use it to our advantage to make our lives easier and use it for a good cause.

  11. This post makes me happy I don’t have one of these at home! Although these smart devices could be slightly convenient for certain tasks, privacy concerns seem to be outweighing the benefits it would give. Since you mentioned a huge weakness to this type of attack, I wonder how often these attacks occur. And another thing to maybe consider if better positioning of the device can possibly lower attacks. Obviously this may not be possible for everyone, but trying to keep the bluetooth range within their residence could be something Echo owners can consider.

  12. I personally do not own an Amazon Echo or Google Home, but my members of my family do. And after reading this post, I will be advising them of the vulnerability and urge them to mute the microphones and or even discard of them. While technological advancements have made life easier, and I appreciate how convenient it makes everyday living, it also has its downsides, such as in this situation with the Amazon Echo, which puts it’s user’s security, privacy, and safety at risk.

  13. While these smart assistants have their uses and advantages, situations like these might outweigh the advantages. I just also found out that you can connect your bank app to your amazon echo so with a vulnerability like this, attackers might not have a problem emptying your bank account then. This was a very insightful post!

  14. Interesting post. I always wondered just how safe these smart devices such as the Amazon echo are. Since people use them for a variety of things, it’s obvious that they store a lot of personal information and so if they were to get leaked somehow, it would be very bad. The vulnerability described in this post was very interesting to read about. I had no clue that something like this could be done to tamper with the Amazon echo. Hearing things like this is what makes me not want one of those, even though they sound fun and convenient to use.

  15. I’ve always seen comments on tech reviewers’ channels about how whenever they’re showcasing the “ok google” functions of a device, it also triggers the google assistant for the viewer. It’s funny how the concept of this attack has been present for a while, yet nobody seemed to have thought of a way of executing it in such a malicious way. The attack where it can spoof an alexa and intercept information is quite scary especially due to how it can give you false information. I’m just glad that this vulnerability can still be caught if you’re attentive enough, who knows how bad it would be if it was undetectable without software debugging.

  16. Good Post! My family owns 2 Google Home devices, one of them is next to me as I type this. This is an interesting, frightening, and rather unique form of cyber attack. Due to the increase in the usability and power of these types of smart speakers, it only makes sense that attempts to breach their security would increase as well. I have to say that these attacks are rather creative as they are simply exploiting built-in voice command functionality as opposed to other types of attacks that breach software based upon invasive outside measures.

  17. This is an interesting post! Smart speakers are a very popular type of smart furniture now, and are loved by many young people. It is a very novel thing to be able to control the device just by talking, but the issues of privacy and information security cannot be missed.

  18. Vulnerabilities like this HAVE to be addressed. When a device that can hear you at any time in most places of your home is revealed to have any exploit this bad, it CANNOT be left alone. Is there any statement from Amazon about the vulnerability or what they’re doing to combat it, or are they keeping silent?

  19. This was a fascinating post to read since, while technological advancements have made life easier for us over time, they have also brought with them problems. Voice assistants demonstrate the trade-off among efficiency and privacy. The technology is not without problems. Without effective regulation, there is no safeguard against future intrusive usage of voice assistants.

  20. My Amazon Echo or Google Home are not in my possession, but members of my family do. And now that I’ve seen this piece, I’m going to warn them about the vulnerability and advise them to turn off the mics or perhaps throw them away. While technology improvements have made life simpler, and I appreciate how handy they make everyday life, they sometimes have drawbacks, such as in this case with the Amazon Echo, which puts the security, privacy, and safety of its users at danger.

  21. I honestly cannot believe that it was that easy. All they had to do was make Alexa talk to herself, and then all of that access was theirs. I wonder if you could make this work with other smart speakers like the google home. In any case, in your opinion do you think that this was an actual oversight, or was it more the developers thinking that there was no way that anyone would ever try to exploit this kind of vulnerability. Do you think they even thought about it?

  22. Thanks for sharing the post. As a smart speaker owner, I know that eavesdropping is the main concern in terms of privacy. There was a time when my speaker suddenly laughed by itself without any commands. I wonder if there is a way to check if there is anything wrong with it.

  23. Its reasons like this vulnerability that I stayed away/ was never really interested in getting a smart home device with an AI virtual assistants like the ones offered by Google and Amazon. The only reason I have one now is because I received one as a gift and even then I don’t have it connected to my accounts and just use it as a glamorized alarm clock because of the risks of things such as the one discussed in this article. While this example requires the attacker to be nearby its scary to think the risk of it being able to be carried out from further away, Great article by the way!

Leave a comment