As Deepfakes Get Deeper, Security Risks Heighten

An emerging social engineering attack combines aspects of misinformation with cyberattacks that compromise data integrity: the deepfake.

Deepfake is a term that combines “deep learning” and “fake” and refers to synthetic videos, images, and audio recordings generated through deep learning techniques. Deepfakes have legitimate uses when created with the consent of the person depicted; in the wrong hands, however, they can cause considerable damage.

How could deepfakes compromise security?

Deepfake attackers impersonate a person or persons of authority to spread misinformation or to manipulate others into providing access to confidential data and funds.


FBI warning (Forbes): https://www.forbes.com/sites/glenngow/2021/05/02/the-scary-truth-behind-the-fbi-warning-deepfake-fraud-is-here-and-its-serious-we-are-not-prepared/?sh=5834dbeb3179

In March 2021, the FBI released a warning about the rising threat of synthetic content. The FBI warns that attackers use deepfake technology to create highly realistic spearphishing messages. Attackers are expected to supplement voice spearphishing with audio deepfakes aimed at persuading a specific individual to share, or allow access to, personal or corporate information. Additionally, the FBI warned about Business Identity Compromise (BIC), a new cyberattack vector that evolved from Business Email Compromise (BEC). BIC uses audio deepfakes to create “synthetic corporate personas” or to impersonate existing employees in order to elicit fraudulent funds transfers.

FBI warning on virtual meeting scams (PCMag): https://www.pcmag.com/news/fbi-dont-fall-for-this-money-transfer-video-chat-scam?amp=true

More recently, the FBI issued another warning about an increase in fraudsters exploiting virtual meeting platforms. Some schemes involve hijacking company executives’ video meeting accounts and impersonating them with deepfake technology. The rise of video conferencing during the pandemic has given cybercriminals a new avenue for tricking employees into wiring company funds.

Deepfakes as a threat to organizations

In 2019, the CEO of a UK-based energy firm was defrauded into transferring US$243,000 to a Hungarian supplier’s bank account by a faked voice impersonating his parent company’s chief executive. It is believed that the threat actors used commercial voice-generating software to carry out the attack. This was the first known example of a deepfake being used in a scam.

In 2021, a bank manager received a phone call that appeared to come from a company director, requesting a $35 million transfer to fund an acquisition. In reality, the caller was not the director: it was a deepfake of the director’s voice. By the time the bank became aware of the fraud, the funds had already been lost.

Deepfake technology is becoming more accessible and easier to use, posing greater risks to organizations. Millions of dollars have already been stolen using audio deepfakes, and the technology is expected to grow more sophisticated. Moreover, deepfakes are not limited to spearphishing attacks or BIC. Video deepfakes have already bypassed facial recognition systems, and they may soon be able to bypass voice recognition as well. With technology that can fool authentication factors such as biometrics, the risk of security compromise is much greater. Organizations should update their security protocols as the potential risk grows.

How to Protect Against Deepfake Attacks

Employee Training

Strengthen your first line of defense against deepfakes by training staff to spot them.

Trust but Verify

To detect an attack before it can cause any harm, implement protocols that specify verification procedures for suspicious communications.
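As an illustration, such a verification protocol can be encoded as a simple policy check that decides when a request must be confirmed over a separate, pre-established channel. The thresholds, channel names, and request fields below are hypothetical assumptions for this sketch, not taken from any organization’s actual policy:

```python
# Hypothetical "trust but verify" rule for funds-transfer requests.
# All thresholds and field names here are illustrative assumptions.

HIGH_RISK_CHANNELS = {"email", "voice_call", "video_call"}  # easy to spoof or deepfake
VERIFICATION_THRESHOLD = 10_000  # amounts at or above this always need a call-back

def needs_out_of_band_verification(request):
    """Return True if a transfer request should be confirmed through a
    separate, pre-established channel before anyone acts on it."""
    if request["amount"] >= VERIFICATION_THRESHOLD:
        return True
    if request["channel"] in HIGH_RISK_CHANNELS and request.get("urgent"):
        # Urgency over a spoofable channel is a classic social-engineering tell.
        return True
    if request.get("new_beneficiary"):
        # First-time payees warrant extra scrutiny regardless of amount.
        return True
    return False
```

Under this sketch, the $35 million voice-call request described above would be flagged on two counts: the amount and the spoofable channel.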

Automated Detection

Automated detection is also possible: the same class of algorithms used to create deepfakes can be trained to detect them.
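Real detectors are deep networks, often trained adversarially alongside the same generative models that produce deepfakes. As a toy sketch only, the shape of such a pipeline is: extract a signal-level feature from the media, then threshold or classify it. The feature and threshold below are illustrative assumptions, not a working detector:

```python
# Toy sketch of an automated-detection pipeline: feature extraction + decision.
# The feature (adjacent-pixel energy) and the threshold are illustrative only;
# a real system would use a trained neural network in place of both.

def high_frequency_energy(gray_image):
    """Mean squared difference between horizontally adjacent pixels.
    Synthetic images sometimes carry unusual high-frequency artifacts."""
    total, count = 0.0, 0
    for row in gray_image:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count if count else 0.0

def classify(gray_image, threshold=900.0):
    """Flag an image as 'suspect' when its artifact score exceeds the threshold."""
    score = high_frequency_energy(gray_image)
    return "suspect" if score > threshold else "likely-authentic"
```

A production system would replace `high_frequency_energy` with a learned model and calibrate the decision threshold on labeled collections of real and synthetic media.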

Response Strategy

Deepfakes should be covered in incident response plans, and stakeholders need to know how to respond when an attack occurs.

References

https://www.entrepreneur.com/article/414109

https://www.forbes.com/sites/glenngow/2021/05/02/the-scary-truth-behind-the-fbi-warning-deepfake-fraud-is-here-and-its-serious-we-are-not-prepared/?sh=5834dbeb3179

https://www.pcmag.com/news/fbi-dont-fall-for-this-money-transfer-video-chat-scam?amp=true

https://www.accdocket.com/deepfakes-get-deeper-security-risks-heighten

https://www.pandasecurity.com/en/mediacenter/technology/deepfake-fraud/

https://builtin.com/cybersecurity/deepfake-phishing-attacks

Join the Conversation

61 Comments

  1. What an interesting article. As fascinating as AI technology can be, I have always wondered about its malicious uses. It seems almost impossible nowadays to authenticate the other party with how scarily accurate deepfakes can be. By simply pretending to be someone, thieves managed to steal a whopping $35 million. That is insane to me. I do hope that while we continue to improve technology, we don’t forget to improve security as well.

  2. It’s quite amazing seeing how far technology has come in recent years, but of course there are always consequences, as there will be those who misuse the technology for malicious purposes. The technology is also so easily accessible by anyone, making it that much more dangerous. The prevention methods against deepfakes that you listed seem to be great strategies for large companies that have the time and resources necessary to implement these preventative measures, but I think it would be much more difficult to protect the general public from misinformation and scams that abuse this technology. I wonder if it would be possible for applications to automatically detect deepfakes and notify the user that the video they are watching is a deepfake, but that seems very expensive, so I’m not sure if it would be feasible. I’m interested in seeing how issues like that will be handled as the technology improves.

  3. This was an interesting read! It is scary to think how deepfakes are getting better and better. With the generative and discriminative networks getting better, it’s only a matter of time before a deepfake becomes indistinguishable from the real thing and fools the human eye. And the problem with the technology is that it is easily accessible. It’s interesting to know how voice can be deepfaked as well.

  4. I still remember when the Professor showed us a deepfake in the lecture; at first I also believed that it was real. Clearly technology is growing and getting better every day, and we need to keep up with the latest news about what’s happening around the world. Otherwise we can also become victims of the above-mentioned scams. Clearly the FBI is trying its best to raise awareness about this type of scam, but people who do not tend to keep up with the news get scammed anyway.

  5. Hi, very interesting post here. I am glad that you have a section for protecting against deepfakes. I know that there can be a lot of misinformation spread and people can impersonate others. We saw one in the earlier lectures this semester, even though that one was easier to “catch” or detect. Unfortunately it can create a bad image for some people, especially those with a lot of publicity. People just have to be more aware on what they see online and learn how to spot these deepfakes.

  6. This was a pretty informative post to read! I still remember seeing my first deep fake and having no idea that it was fake at all, so it is pretty terrifying to imagine what could happen when people get even better at making deep fakes. The one in the lecture, while easier to see that it was fake, is still a good example on how technology can evolve rapidly and be used for bad intentions.

  7. As deepfake technology progresses, this post highlights how important the authentication of people is and what the growing challenges are. Before, you could probably authenticate someone by their voice or face, but now it seems like you need something stronger, like multi-factor authentication that couples passwords with a verification message. These deepfakes seem to reinforce phishing attacks, as they could impersonate someone with high prestige and succeed that way. I imagine that if the Elon Musk Twitter crypto scam had included a deepfaked video, many more people would’ve been scammed and sent funds to the crypto wallet hoping to double it.

  8. Great post indeed! Deepfake is real. Alongside the topics discussed in this post, deepfake technology is now widely used in pornography as well. Many people out there simply become victims without even realizing it. The most embarrassing situations are for celebrities, as deepfake technology is used to target media celebrities, simply swapping a celebrity’s face into pornography. This way the porn industry grabs more viewers and makes money.

  9. This was one interesting topic to learn about! Hearing these stories about technologies being used the wrong way really makes me worry about the Internet. I also felt bad for the manager who lost $35 million, because deepfakes look very realistic and are very hard to distinguish from the real thing, so we cannot blame him for the incident. I also think it’s against people’s rights to create a deepfake using someone’s face, because it can ruin their reputation!

  10. I think it is interesting to see the advancements in deepfake technology. While deepfake technology has always been a threat to the online security of individuals and organizations, real-time deepfake tools now show how little we can trust information online, including live content. Seeing a blog like this makes me wonder what sources to trust in a time like this, when even authenticated people might not be able to distinguish between a real message and a deepfake, causing losses to organizations like the example given above. Also, when you think about the number of people who use unverified sources on apps such as TikTok and other social media platforms to find information and develop beliefs, deepfake technology certainly has the potential to become a very dangerous weapon in the wrong hands. Great blog!

  11. This was super interesting! I’ve always thought of the risks of deepfakes as just being those to trust in media, information about global events, etc., but it is very interesting to see how they could be used in more traditional attacks. Certainly something to add to response prevention and readiness protocols! I would be curious about the accessibility of the technology, and whether it will only be used on big companies in the near future (i.e. justifying the cost), or whether it is accessible enough to expand to lower-hanging fruit. For example, the traditional “grandparent” scam could be much harder to avoid if the attackers are also able to create a fake video of the grandchild.

  12. This is interesting and scary at the same time. I usually receive scam calls impersonating banks or CRA officers. Since the voices are mostly automated or over-dramatized, it is easy to know they are fake. However, if they can make the voice sound more real, or even like a real person (like your bank consultant), then it would be huge trouble. I think most agencies now contact via email, so it is safer to check your email or contact the agency at the official number to verify any information.

  13. That was an interesting post to read! I learned that there are high-quality deepfake fabrications nowadays that might be used for a number of malevolent purposes, posing a threat to our security or even jeopardising the integrity of elections and the democratic process. It’s so scary to think that someone can create falsified content, replacing or synthesizing faces and speech and manipulating emotions. Overall, a well-written post!

  14. It is interesting to see AI technology used during these heinous acts of fraud. It is becoming easier for groups and individuals to acquire this technology and use it for their own personal benefit. The fact that they are now becoming more capable of bypassing security measures is very concerning. I feel like this tech will make deepfakes even easier for any user to create. Perhaps AI should also be used to play a defensive role against these scenarios and warn the user if the other party is indeed an impersonator.

  15. This is very interesting to me that deepfake is being used for hacking and robbery, as I have only ever heard of it being used for disinformation or humour. It makes sense that deepfake would be used by scammers, although I am very surprised that audio deepfakes have already been able to rob millions of dollars.

  16. This was an interesting topic to read! When utilized for the correct objectives, such as expanding educational and artistic potential, synthetic media may be a powerful instrument. On the other hand, it produces a false narrative about individuals, and attackers may use deepfakes to trick someone into handing over personal data and information. Deepfakes are easy to fall for since people have a tendency to believe their own eyes and ears, and deepfakes appear to be too good to be true. Without consent, deepfakes pose a threat to psychological well-being, political stability, and business disruption. This is an interesting blog!

  17. I believe it’s fascinating to watch how deepfake technology has progressed. While deepfake technology has always been a threat to individuals’ and organisations’ internet security, real-time deepfake tools demonstrate how little we can trust information online, particularly live material. Seeing a blog like this makes me wonder what sources to trust in a time like this, when even verified personnel may be unable to tell the difference between a genuine message and a deepfake, resulting in financial loss to an organisation like the one mentioned above. Consider the number of individuals who rely on unverified sources on apps like Instagram Reels and other social media platforms to gather information and form opinions. In the wrong hands, deepfake technology has the potential to be a very destructive weapon. What a fantastic blog!

  18. One promising piece of news in regards to DeepFakes is companies such as Facebook & Microsoft have developed and tested internal tools to detect DeepFakes with a promising level of accuracy. This may lead to new software & tools that businesses may have to purchase as part of their IT infrastructure to filter out DeepFakes.

    If any state actor starts employing deepfakes as part of their cyber warfare suite, the lines start to become even more blurred, as any government-sanctioned activity has much more funding and resources available compared to an individual. The thought of state-sanctioned fake news/propaganda and deepfakes is terrifying; it can further isolate people into their echo chambers and cause increased internal division and strife in any country.

  19. Great post. As threatening as they are, the technology behind deepfakes certainly is interesting. The story about the $35 million transfer is particularly scary: to think that a manipulated piece of audio or video could cause that much loss. I wonder how much money in total has been lost to transactions or scams involving deepfakes. I feel like as deepfakes become more and more believable, and thus harder to detect, people will naturally stop trusting video as a “source” of proof.

  20. I’ve been pretty familiar with deepfakes when it came to faces, but I had never thought of using a deepfake to impersonate a voice! I feel like deepfakes might end up becoming an even bigger threat in the near future as technology gets better and more accessible. But I’m wondering what else can be deepfaked, since I hadn’t even thought of voices. However, on the flip side, deepfakes can make some really cool content. For instance, one of the later episodes of The Book of Boba Fett on Disney+ (spoilers for anyone wanting to watch the show). Disney tried deepfaking Luke at the end of The Mandalorian, but it honestly didn’t work well. Jump ahead to the aforementioned episode, and they did an impeccable job with Luke, even using AI to synthesize Mark Hamill’s voice (which I honestly thought was just a voice actor at first). So yes, deepfakes are a concerning threat that may only get worse over time, but they also have some excellent uses too, to the point where we may be able to get any actor to play in any show/movie thanks to deepfakes.

  21. This shows us how powerful a computer is, as it is almost able to mimic another person online, and the scary part is, it has worked. This reminded me of the incident when the Twitter accounts of famous people like Bill Gates were hacked and how easily scammers were able to scam people with just one suspicious tweet about duplicating money.
    I believe people trust things too much and when they hear/see a person they know online, they drop all their suspicions and even if the deepfake isn’t perfect, it can just be blamed on bad internet.

  22. Great and interesting post to read! I really enjoyed the section you included about how to protect yourself and companies against deepfake scams. It is worrying how fast deepfake technology has advanced while the technology for using biometrics has not advanced much. I knew about deepfakes before, but your post really highlighted how deepfake scams work and how scammers are able to pull them off. Worthwhile read!

  23. Nice post! I think that as the technology behind deepfakes evolves, we will have to prepare ourselves for an increase in this new kind of cyberattack. I also believe that the main defense against deepfakes would have to be AI trained to detect deepfakes. From what I’ve seen on deepfakes so far, it’s getting to the point where it’s hard to distinguish with just the human eye. Furthermore, the general population needs to be more educated on this topic as a whole because we’ve all seen what misinformation can do in the news.

  24. I guess the takeaway from this is that modern problems require modern solutions. Deepfakes have been getting a lot more legitimate in recent years, and it does make sense that company executives could be easily impersonated over something like Zoom. I wonder how long it’ll be until employees receive formal training against this kind of thing.

  25. Great post Seyeon. Deepfakes have been around for quite some time, and I often wonder how useful the technology can be to computer science, since I can only imagine it being a tool for fraud and scamming. In addition, the more advanced deepfake technology gets, the greater the risk it poses to society, since it could fool our own ears and eyes without being detected as a fabricated product. If a big bank with high security and authentication protocols is subjected to deepfake fraud, how can regular people tell the difference? To me, deepfakes have more negative effects than positive, and we need more tools and measures to detect them.

  26. Very informative post! I’ve been hearing more about deepfakes recently and I can definitely see the ethical issues surrounding them. The fact that voices can be recreated, or that photos and videos can be made to show people doing things that they didn’t actually do makes it even more difficult to spot misinformation. It makes it more difficult to trust the faces and voices you thought you knew—that is terrifying. While I can see deepfake technology being put to good use (perhaps to catch criminals), whether or not it is ethical to do so is another question. Considering how easy it is to create deepfakes, the potential usages will only continue to expand. As such, there is no telling what they could be used for next.

  27. The first time I heard about deepfakes was a couple of years ago, and even then the technology was quite frightening in its accuracy. Looking at the technology behind deepfakes, it is very impressive how far it has come, but the risk of phishing has also grown. There are even websites whose purpose is to make deepfakes of notable individuals. It is interesting that in the bank transfer theft there were not more checkpoints and preventative measures, like security keys, to authenticate the identity of the speaker. Hopefully, increased security measures can be implemented to prevent future scams.

  28. Thank you for the informative post and mentions on how to protect ourselves against this attack! I find deepfake to be both a fascinating concept and a frightening one. I always marvel at how far we have progressed with this technology and how we were able to create realistic “copycats” of ourselves using code and algorithms. However, as with everything, they come with consequences and people start to use them for their own malicious activities. As a result, deepfake comes out to be more like a weapon than a benefit to society.

  29. A horrifying reality of deepfake tech is the ability for powerful people to use it to get out of trouble, rather than attackers using it to cause trouble. If someone were to gain video or vocal evidence that someone among the elite were doing something wrong, the victim of the attack could claim it were a deep fake and, until someone comes around and confirms it’s not, they escape prosecution. An easy example would be if you managed to get a video of Elon Musk cheating on his wife at a night club. If you release that video to the public, he can claim it’s a deepfake, and even possibly pay off some visual effects company to claim they did an analysis and say that the video was doctored. You lose your credibility and Elon continues with his mistress. This, I think, is a more terrifying way that deepfakes could be used than to ruin someone’s reputation, using them as a way to maintain said reputation.

  30. Great post. Deepfake tech is one of the technologies that I am surprised more people are not concerned about. I think deepfakes are the next thing in scamming and fraud, and as far as I know, very few companies are training people about them. On that note, you mention training employees to spot audio deepfakes. I was wondering what telltale signs, if any, suggest that audio is a deepfake, and what patterns employers should train their employees to watch for.

  32. At this point, the advancement of technology is horrifying, causing people to lose trust in information systems. It is shocking to see how much a computer can do and how powerful they have become. I could never have thought technologies like deepfakes could ever exist. Whenever my father calls me over the phone to do a chore, I do it without even a hesitation that the person over the phone could be someone else impersonating my father. For an average user, I think it is nearly impossible to detect whether a video, text, or voice was made with a deepfake. The field of deepfakes is still quite new, therefore I believe governments, schools, and colleges should educate more people and spread awareness about deepfakes so that people can be more cautious about such vulnerabilities.

  33. Nice post!

    I think that with the current Russia-Ukraine situation going on, we may see deepfakes being used as cyberwarfare. I have read up somewhere that either side may show deepfakes to demoralize the other. For example, Russia could release a deep fake of Ukraine’s president surrendering or just simply use him for propaganda in order to gain support of their military and citizens back home. Again, cyberwarfare being relatively new in this scale — it becomes scary on what can be possible with deepfakes.

  34. The first time I saw deepfakes was on YouTube, and they were only being used for fun and entertainment. I never knew that voices could be deepfaked as well and that there are scams because of this. However, I am aware that as the technological world keeps advancing, instances such as these are bound to pop up. What matters is that the world has to be educated and made aware of these scenarios and how everyone should deal with these cases. An average businessperson could take certain precautions to avoid any deepfake audio issues. Before making any transactions in companies, I believe the transactions between the payer (businessperson) and the receiver should be handled upfront rather than on a call (considering the amount of money that is being transferred as well). The aforementioned example is just one scenario; I am very sure there could be a lot of other cases where deepfakes are being used with malicious intent. It really is unfortunate that some people have to first become victims before letting the world know how to deal with those situations. Devising a strategy to deal with these cases can be hard as well.
    This post was really informative and interesting. Great job!

  35. This is another important article how almost all scientific progress (in this case AI and CGI) have a lot of upsides, but when used with malicious intent also pose some risks and threats. It is impressive to see what is possible to animate in movies, and how much CGI has improved over the last decade, but it is also disturbing to realize that one cannot simply trust videos or voice messages they find online or even in private (video-)calls. All of us have perhaps already seen examples of deepfakes on YouTube and it is frightening how real it looks. Now seeing how this can actually be utilized for frauds and other crimes is disturbing. It is certainly good to be aware of these technologies, on one hand for companies and individuals to not be cheated but also for individuals not to fall victim for misinformation online, which is another very pressing topic at the moment.

  36. This is a very interesting post. I have always found deep fakes to be very cool and fascinating. The amazing part is how real they all seem to be. I’ve seen so many deep fakes of celebrities that I was convinced were real until I read the comments in the video. I always had a feeling, that with malicious intent, deep fakes could be very dangerous and it seems that I was right. The fact that people are using deep fakes to steal millions of dollars is not surprising, but very sad. There needs to be increased security measures in the future to make sure that deep fakes can be identified.

  37. This is an example of AI being used for malicious purposes. I find it difficult to think of ways deepfakes can be used for good (perhaps maybe for parodies or comedic videos). The first one I’ve seen is one of Barack Obama, and I could not even tell it was a fake until I was told. There is a website called https://deepware.ai/ to check if what you are witnessing is a legitimate video, or a deepfake. I would not be surprised if scammers do social engineering and create deepfakes of people to trick their close ones into sending them money.

  38. This is an interesting post. I find the potential of deepfakes to be used maliciously absolutely horrifying. Knowing how reliant our society is on appearance and voice for recognition, and knowing that anybody’s face and voice can be copied, makes me terrified. An attacker can easily create a deepfake of any celebrity or influential figure (like a president) and completely slander them. I find it very surprising that there aren’t more security measures to protect against “deepfake attacks”. Hopefully, we can implement more protection against these types of attacks.

    1. For sure! I really do not know if there are any effective ways to not get fooled by deep fakes but I am sure some strategy will be made up in the future. The only advice I can give to people is to be extremely cautious, when they are on the internet.

  39. Great Post!
    I think this post reflects the dark side of deepfake technology. Even though I was aware that deepfakes could be abused for unauthorized use, I am very surprised that they can be used for cyber crimes that rob millions of dollars! Since this is a fairly new topic, this post was very beneficial in reminding people of the different ways deepfakes can be abused.

  40. Very interesting read….
    To be honest, I personally don’t believe that an actual professional environment with proper protocols for funds management and transactions is at risk yet, for it is still user error according to these articles (still a victim of social engineering, of course); the real issue arises when attackers are able to leverage deepfake technologies in real-time conference scenarios. That, combined with email/number spoofing, is when social engineering will be at its peak, but until then it’s very interesting to see how deepfake AI targets those not educated about this newer and niche tech.

  41. I think a lot of people, me included until recently, have the perception of deepfakes as weird-looking memes on social media. However, it is eye-opening just how convincing they can be and the damage they can do. The worst part is that with regard to misinformation through video, very little can be done in the initial phases to counter it; other than debunking it through official sources, which takes time, it’s very much up to the user to verify the authenticity of the video, which can be difficult. Additionally, the resources needed to make deepfakes (pictures/audio of a person) are publicly available for anybody in a position of power (and truthfully, deepfakes of these individuals are the most impactful anyhow). It’s dangerous enough that I had a CPSC professor last year who refused to record lectures, citing fear of deepfakes, and while it was obviously inconvenient, I can’t say I blame him anymore.

  42. This is an interesting post. This is the first time I have heard the term ‘deepfake’. I think this method of artificial intelligence through deep learning is similar to imitating people’s biometrics. I guess that there may be an algorithm behind this method to record people’s voices, fingerprints, irises, etc., and when the identity of the person whose features are recorded is to be used for a certain purpose, the algorithm is deployed to achieve the effect of falsehood. However, no matter how realistic the imitation may be, there may be slight differences from the real features. I think officials could design an algorithm to identify this easily overlooked difference.

  43. Good post! This post has a large number of comments already, so apologies if some of my observations have been made already. Something that stood out to me regarding deepfake technology is how it can fool facial recognition software. I’m sure that anyone who owns a newer iOS device can tell you that their device is unlocked with a quick scan of one’s face. This type of facial recognition technology sounds exceptionally vulnerable to someone using a deepfake to fool it. An attacker may only need a picture of you to successfully steal and unlock your phone. The popularity of iOS devices that use this technology is very high, making the risk involved with deepfakes even greater.

  44. Really interesting post! In some situations, there are certain technologies, weapons, etc., that are just sort of universally regarded as something nobody should develop because the potential harms that might arise are too great. Do you think deepfake technology should be one of those things, especially considering the potential harms that can arise from misuse of deepfaked images, video, and audio?

  45. I’m surprised the first known example of a deepfake being used in a scam was in 2019 and not earlier. Regardless, deepfake technology, especially as it continues to improve and become more advanced, is quite scary. I can only imagine getting a call and hearing a voice, or seeing a video, that sounds exactly the same and thinking nothing of it while it’s actually a deepfake.

  46. It’s crazy how advanced these attackers have become. Throughout this course, the need for multi-factor authentication for literally everything has become a lot more important to me than before we started this course. What was once just a nuisance has now almost become a necessity. In this case, I would imagine it would be very difficult to tell the difference; I recently read a post about human error being the leading cause of error in cybersecurity, and in this case I believe that’s what is essentially being exploited. Hopefully companies can come up with a way to counteract this!

