The Growing Threat of AI Impersonation Fraud – Deepfakes


Imagine a world where anyone’s voice or face can be convincingly faked, blurring the line between real and fake. This is the growing threat of deepfake technology, which allows cybercriminals to impersonate people with striking realism.

What are Deepfakes?

Deepfakes are advanced synthetic media that can convincingly imitate people using altered images, audio, and video1. The term combines “deep learning” with “fake,” reflecting its reliance on complex algorithms to create lifelike impersonations2.

The Growing Risks of Deepfake Technology

Cybercriminals are increasingly leveraging deepfakes to impersonate executives, employees, and clients, undermining trust in financial transactions and heightening risks of identity theft and disinformation. Imagine a junior employee on a video call with executives; a convincing deepfake of a high-ranking official could persuade even a cautious employee to comply with fraudulent requests3.

Below are some recent incidents that highlight the risks posed by deepfake technology:

  • The CEO of Wiz, a cloud security company, revealed that a deepfake impersonated him to target employees for credentials4 (October 2024).
  • A US Senator was targeted by a deepfake impersonating a Ukrainian diplomat on a Zoom call, attempting election interference8 (September 2024).
  • An 82-year-old retiree lost over $690,000 to a scam involving a manipulated Elon Musk video that promised quick returns on investments10 (August 2024).
  • The cybersecurity firm KnowBe4 fell victim to deepfakes during hiring interviews, inadvertently hiring a North Korean attacker5 (July 2024).
  • The CEO of WPP, the largest ad firm in the world, was targeted with a deepfake attempting to solicit money and personal details over virtual conferencing6 (May 2024).
  • A finance worker at Arup, a UK engineering firm, was tricked into transferring $25 million after being convinced by deepfakes impersonating the CFO and other executives during a video call7 (May 2024).
  • A school principal in Baltimore was placed on leave after deepfake audio featured him making racist comments, which turned out to be a fabrication by a colleague9 (April 2024).

These cases highlight a critical truth: attackers have exploited, and will continue to exploit, every tool at their disposal, and AI has made doing so easier and cheaper.

How deepfakes are made

Creating a deepfake is not as simple as using a photo app; it requires substantial data and computing power. First, a dataset of images, audio samples, or videos is collected for the person being mimicked. Then, using a technique such as a generative adversarial network (GAN), the AI learns to replicate the person’s voice, appearance, and mannerisms11. Finally, this learned model is applied to generate new content in which the person appears to say or do things they never actually did11.

There are several methods for creating deepfakes, one of the most popular being the generative adversarial network (GAN). A GAN consists of two AI systems in continuous competition: the generator, which creates fake content that looks real, and the discriminator, which judges whether the content is real or fake11. The generator uses what it has learned to produce deepfakes that must then deceive the discriminator, which evaluates the generator’s output by comparing it to real images11. As this process repeats over many iterations, each round makes the fake content more and more difficult to distinguish from the real thing11.
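The adversarial loop described above can be sketched in miniature. The toy example below is purely illustrative (an assumption for the sake of explanation, not how production deepfake tools are built): a one-parameter “generator” learns to mimic a simple 1-D data distribution instead of faces or voices, but the structure of alternating discriminator and generator updates is the same idea at a much smaller scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: x_fake = w*z + b, with noise z ~ N(0, 1).
w, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(a*x + c), the probability that x is real.
a, c = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, steps, batch = 0.02, 3000, 128
for _ in range(steps):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    d_real, d_fake = sigmoid(a * x_real + c), sigmoid(a * x_fake + c)
    grad_a = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    dx = -(1 - d_fake) * a        # gradient of generator loss w.r.t. x_fake
    w -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = w * rng.normal(0.0, 1.0, 10_000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 4.0)")
```

After training, the generator’s output distribution has drifted toward the real one, exactly the “iterate until indistinguishable” dynamic the paragraph describes.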

Another method involves autoencoders, which specialize in face replacement and swapping. Deepfake applications use two autoencoders, enabling the transfer of facial features and movement from one image to another, resulting in realistic-looking content12.
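The face-swap architecture is usually one shared encoder paired with one decoder per identity: the encoder captures pose and expression, and each decoder renders a specific face. The sketch below shows only that wiring, with untrained random weights on flattened stand-in images; every name and dimension here is a hypothetical simplification, not code from any real deepfake application.

```python
import numpy as np

rng = np.random.default_rng(1)

class Linear:
    """A single dense layer with random (untrained) weights."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)
    def __call__(self, x):
        return np.tanh(x @ self.W + self.b)

# Classic face-swap layout: ONE shared encoder learns pose/expression,
# and one decoder PER identity learns to render that identity's face.
FACE, LATENT = 64 * 64, 128          # flattened 64x64 image, 128-dim code
encoder   = Linear(FACE, LATENT)     # shared between both identities
decoder_a = Linear(LATENT, FACE)     # reconstructs person A
decoder_b = Linear(LATENT, FACE)     # reconstructs person B

def swap_a_to_b(face_a):
    """Encode A's pose/expression, then render it with B's decoder.
    After training, this makes B appear to move and emote like A."""
    return decoder_b(encoder(face_a))

face_a = rng.random(FACE)            # stand-in for a real photo of person A
swapped = swap_a_to_b(face_a)
print(swapped.shape)                 # (4096,)
```

The swap itself is just routing A’s latent code through B’s decoder; the realism comes entirely from training both decoders against the shared encoder.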

What are deepfakes used for2

Deepfake technology has a range of applications, both legitimate and malicious. Some notable uses include:

  • Celebrity face swaps
  • Scams and hoaxes
  • Automated disinformation attacks
  • Election manipulation
  • Identity theft and financial fraud

What kind of deepfakes exist1

  • Textual deepfakes
  • Video deepfakes
  • Audio deepfakes
  • Social media deepfakes
  • Real-time or live deepfakes

How to spot deepfakes

While detecting deepfakes can be challenging, watch for these common signs12:

  • Limited eye movement, lack of blinking, rigid expressions, and misaligned facial features.
  • Distorted body shapes, jerky or misaligned head and body movements, or disappearing body parts.
  • Inconsistent shadows, unusual skin tones, flickering, blotchy patches, and mismatched lighting.
  • Overly perfect hair without texture, blurred jewelry, or artifacts around the neck and face.
  • Poor lip-syncing with speech or mismatched audio.

Combating deepfakes11,13

As deepfakes grow more sophisticated, several strategies are emerging to counter their misuse:

  • Social Media Rules: Platforms like Facebook, Twitter, and YouTube are deploying deepfake detectors and tagging or removing manipulated content around sensitive topics.
  • Detection Technologies: AI tools such as DeepTrace and Reality Defender detect and analyze manipulated media, acting like antivirus software for deepfakes.
  • Employee Training: Educating employees to recognize deepfakes and to verify unusual requests can prevent social engineering.
  • Internal Protocols: Implementing multi-step verification, zero-trust policies, and transaction authentication helps prevent unauthorized actions.
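The multi-step verification idea behind these protocols can be made concrete with a small sketch. The code below is a hypothetical illustration (the class and field names are invented, not any real system’s API): a payment request arriving over one channel, such as a video call, is only executed after a one-time code is confirmed over a separate, pre-registered channel, which a deepfaked caller cannot access.

```python
import hmac
import secrets

class PendingTransfer:
    """A transfer request that requires out-of-band confirmation.

    The request may arrive over a video call, but the one-time code is
    delivered over a SEPARATE pre-registered channel (e.g. SMS or a
    callback to a known phone number), never over the same call.
    """
    def __init__(self, amount, beneficiary):
        self.amount = amount
        self.beneficiary = beneficiary
        self.code = f"{secrets.randbelow(10**6):06d}"  # sent out of band
        self.approved = False

    def confirm(self, code_from_other_channel):
        # Constant-time comparison avoids leaking the code via timing.
        ok = hmac.compare_digest(self.code, code_from_other_channel)
        self.approved = self.approved or ok
        return ok

t = PendingTransfer(25_000_000, "ACME Corp")
print(t.approved)        # → False: the video-call request alone does nothing
t.confirm("wrong")       # a deepfaked caller without the code fails
print(t.approved)        # → False
t.confirm(t.code)        # the real requester reads back the out-of-band code
print(t.approved)        # → True
```

Had Arup-style attackers faced even this simple control, the video call alone could not have authorized the transfer; the design choice is that no single channel is ever sufficient.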

Next steps

Deepfakes impact individuals, businesses, and governments alike. While strategies exist to combat them, a coordinated effort is essential. Strengthening verification processes and cybersecurity protocols, together with ongoing education and vigilance, will be key to adapting to the evolving landscape of AI-driven fraud14.

References

1. Zenarmor. Do You Believe Your Eyes: Deepfake Explained. 2024.
https://www.zenarmor.com/docs/network-security-tutorials/what-is-deepfake

2. Fortinet. What Is a Deepfake?
https://www.fortinet.com/resources/cyberglossary/deepfake

3. ClearOPS. Mitigating Deepfake Phishing in Corporate Structures: Essential Steps for AI Governance.
https://www.clearops.io/website-blog-post/mitigating-deepfake-phishing-in-corporate-structures-essential-steps-for-ai-governance

4. Sarah Perez. TechCrunch. Wiz CEO Says Company Was Targeted with Deepfake Attack That Used His Voice. October 28, 2024.
https://techcrunch.com/2024/10/28/wiz-ceo-says-company-was-targeted-with-deepfake-attack-that-used-his-voice/

5. Stu Sjouwerman. KnowBe4. How a North Korean Fake IT Worker Tried to Infiltrate Us.
https://blog.knowbe4.com/how-a-north-korean-fake-it-worker-tried-to-infiltrate-us

6. Nick Robins-Early. The Guardian. CEO of World’s Biggest Ad Firm Targeted by Deepfake Scam. May 10, 2024.
https://www.theguardian.com/technology/article/2024/may/10/ceo-wpp-deepfake-scam

7. Kathleen Magramo. CNN. British Engineering Giant Arup Revealed as $25 Million Deepfake Scam Victim. May 17, 2024.
https://ca.finance.yahoo.com/news/british-engineering-giant-arup-revealed-030558740.html

8. Robert Tait. The Guardian. US Senator Targeted by Deepfake Caller Posing as Ukrainian Diplomat. September 26, 2024.
https://www.theguardian.com/us-news/2024/sep/26/ben-cardin-dmytro-kuleba-deepfake-ukraine

9. Paul Schwartzman and Pranshu Verma. The Washington Post. Baltimore Principal’s Racist Rant Was an AI Fake. His Colleague Was Arrested. April 26, 2024.
https://www.washingtonpost.com/dc-md-va/2024/04/26/baltimore-ai-voice-audio-framing-principal/

10. Stuart A. Thompson. The New York Times. How ‘Deepfake Elon Musk’ Became the Internet’s Biggest Scammer. August 14, 2024.
https://www.nytimes.com/interactive/2024/08/14/technology/elon-musk-ai-deepfake-scam.html

11. Bart Lenaerts-Bergmans. CrowdStrike. What Is a Deepfake Attack? April 16, 2024.
https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/

12. Fortinet. What Is a Deepfake?
https://www.fortinet.com/resources/cyberglossary/deepfake

13. Matt Seldon. Homeland Security Today. Deepfake Technology Poses Major Threat to Financial Sector; FS-ISAC Issues Guidance. October 25, 2024.
https://www.hstoday.us/subject-matter-areas/ai-and-advanced-tech/deepfake-technology-poses-major-threat-to-financial-sector-fs-isac-issues-guidance/

14. Department of Homeland Security. Increasing Threat of Deepfake Identities. 2021.
https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf

Join the Conversation

10 Comments

  1. Exciting post, Cynthia! I really enjoyed reading about this topic and appreciate the various sources on past deepfake attacks. Looking more closely at the CEO case, where he was the target of an elaborate deepfake scam involving an AI voice clone, I am surprised by how often this happens to senior executives. Given such attacks, perhaps extending education beyond regular employees to also target senior officials within the company would increase security, given the amount of information under their control. Thinking back on some of my own experiences, I sometimes have trouble telling an AI-generated image from a real one, which concerns me about what the future holds for security. Perhaps, as you mentioned, incorporating detection technologies such as DeepTrace and Reality Defender that detect and analyze manipulated media may be significantly beneficial in industry.

  2. Thank you, Cynthia, for this interesting post!
    It is not surprising where the technology space is headed, given the fast pace of technological growth and advancement. With the rising number of deepfake cases, more awareness and training need to be provided to everyone, both at work and at home. Imagine the pain deepfakes could cause an unsuspecting senior who is hoping to receive a service, maybe from a bank, that turns out to be a deepfake scam. I think we are in the era of zero trust! I hope awareness of deepfakes reaches more people before they become victims.

  3. This is so interesting, Cynthia. Can you imagine a whole cybersecurity firm like KnowBe4 being caught in a deepfake attack? It’s seriously alarming. Your blog is eye-opening and informative. A reliable solution to deepfakes could be a huge step forward in tackling romance scams and so much more.

    Romance scams have been around forever, but they’re still a serious problem, and deepfakes are probably making things worse. These scams often target lonely or older people, tricking them into believing that their attacker is someone they know and trust, sometimes even posing as their children, friends or relatives.

    MIT has a helpful article with tips on spotting deepfakes (https://www.media.mit.edu/projects/detect-fakes/overview/), and I think another important piece of the puzzle is educating our loved ones. If we help our families understand how deepfakes work and how to spot them, we can protect them from falling victim to these increasingly convincing scams.

  4. Very well articulated, Cynthia.
    Like any emerging technology, deepfakes are the subject of controversy; one can debate whether a knife is a kitchen tool or a harmful weapon. Deepfakes come with astonishing opportunities, such as enriching media content for consumers who may benefit from removing language barriers to deliver cross-cultural content (Mustak et al., 2023) [1], and medical applications where they can provide a voice to people who have lost their own due to medical conditions (Whittaker et al., 2020) [2]. From a security lens, I fully support the precautions you have raised regarding committing to internal policies and identity proofing. Those are crucial mandates to ensure a safe and secure authentication process before actions take effect.

    [1] Mustak, M., et al. (2023). Deepfakes: Deceptions, mitigations, and opportunities. Journal of Business Research, 154, 113368.
    [2] Whittaker, L., Kietzmann, T. C., Kietzmann, J., & Dabirian, A. (2020). “All around me are synthetic faces”: The mad world of AI-generated media. IT Professional, 22(5), 90–99.

  5. Interesting post, Cynthia!
    At present, deepfakes are one of the major issues in the world. I personally believe deepfakes are a technological marvel, but the threats they pose across several sectors are significant. The examples in the article highlight how fast deepfakes have changed from a novelty into advanced tools used for fraud, deception, and even political interference. The technology behind deepfakes is fascinating, but it also raises security concerns, particularly now, when digital interactions are increasing day by day. As cybersecurity professionals, we have to come up with technology and solutions to tackle deepfake-related issues, and we also have to make sure this technology is used for betterment rather than for tricking and defrauding people.

  6. This is such a deep dive into deepfakes and the risk they’re becoming; thanks for breaking down the details, Cynthia! How wicked it is to see criminals effectively employing deepfakes to fool people and institutions. Deepfake detection services such as DeepTrace and Reality Defender both seem like a pretty good defense against these. I love the analogy that they are antivirus software for deepfakes. The technology is moving so quickly, though, that it’s hard to see how detection tools will keep up.

  7. Great post, Cynthia! It again connects back to how AI is being managed and controlled in terms of privacy. Deepfakes can be used for the worst, most unexpected activities, which can cause unlimited damage and ruin trust in digital interactions. This threat can only be combated through strong awareness and technological advancement.

  8. An informative post highlighting deepfakes. It’s one of the most talked-about topics of recent times, as many people are being affected by it. Additionally, this post describes what goes into making a deepfake (substantial data, computing power, and AI) and the types that exist. From recent articles about deepfakes, we mostly see celebrity identity theft, automated disinformation attacks, election manipulation, and financial fraud. Looking at the ways of identifying deepfakes and the preventive measures for stopping them, I think deepfakes are becoming more harmful day by day, badly affecting people both personally and financially, and it is high time everyone put enough safeguards in place.

  9. Very interesting post! AI impersonation sounds very scary and very close to home. I love that the examples show anyone could be a victim of this crime: from an elderly retiree to a school principal, from a finance worker to even a cybersecurity firm and a company CEO, all were victims of deepfake technology. This shows how realistic AI impersonation has become and how difficult it is to distinguish from a real person. While I was reading your post, it made me imagine a dark, dystopian society where we do not even know what is real and what is fake. I know it sounds unrealistic and far off, but who knows what can happen in ten years, with technology growing at full speed.

  10. Great topic, Cynthia.
    With the advancement of technology in this field, especially with deepfakes, this is a real concern for humans in general and for the cybersecurity space as well. What should be done is to create more awareness among the general public and ourselves, and to educate people on how to always protect our data and assets from cybercriminals.
