Artificial intelligence (AI) as we know it is growing in use at an exponential rate. Specifically, within the cybersecurity field, the rise of such AI technology simultaneously presents extraordinary opportunities and intimidating challenges. While AI can identify and exploit vulnerabilities easily, it introduces significant risks if it does not deploy its own set of security measures.[1] Many organizations today prioritize AI innovation at the expense of security in light of the efficiency and fast-paced results leaving systems vulnerable.[1] Such priorities underscore the need for established security frameworks and ongoing education about the dynamic risks that AI may present within cybersecurity.[1]
According to research from Absolute Security, around 54% of Chief Information Security Officers (CISO) feel their security team is unprepared for evolving AI-powered threats.[1] Furthermore, almost half, or 46% of CISOs believe that AI is more of a threat to their organization’s cyber resilience than it is of use, highlighting AI as a potential danger.[1] Surprisingly, 39% of CISOs have personally stopped using AI due to fears of cyber breaches and 44% have banned the use of AI by employees for the same reasons.[1] While “Out of sight, out of mind” may be one way to solve the problem, this will not help businesses and companies as much as it will hurt them. Instead, as technology evolves, it will be more beneficial to educate cyber security professionals about new threats, conduct regular team meetings to discuss the most current attacks, educate team members on how to mitigate risks effectively and safeguard operations by prioritizing AI security.[1]
Three types of AI Models
Before we delve into how the misuse of AI models is utilized by threat actors in security fields, let’s discuss what these three AI models are and how they can be utilized in the security context.
- Generative AI: These models understand human input and can deliver outputs in a human-like response continuously refining its outputs based on user interactions.[3] Well-known generative AI models include ChatGPT and CoPilot. In a security context, generative AI can be used to generate a human-readable report of all security events and alerts or create and send phishing emails.[3]
- Supervised Machine Learning: These models analyze and make predictions from well-labeled, tagged, and structured datasets.[3] In a security context, supervised machine learning can help analyze all the security and technical data, finding patterns and predicting attacks before they happen.[3]
- Unsupervised Machine Learning: These models are great for analyzing and identifying patterns in unstructured or unlabeled data.[3] In a security context, unsupervised machine learning can sift through and process large volumes of network flows to identify malicious patterns without having an individual do so.[3]
Misuse of Generative AI models by Threat Actors
Cybercriminals use AI to accomplish very specific tasks. Some include:
- Writing hyper-targetted Business Email Compromise (BEC) emails to attack companies using tools such as WormGPT. [4]
- Creating polymorphic malware – new variants of existing malware.[4]
- Scanning and analyzing code to identify vulnerabilities in target systems. [4]
- Creating video and voice impersonations for social engineering attacks.[4]
In this blog post, I will specifically delve into how business email compromise attacks work and how generative AI such as WormGPT is used to accomplish this attack.
Custom models such as WormGPT are being created and advertised for use for malicious purposes. cybercriminals often gain access to such tools through prominent online forums associated with cybercrime. This tool presents itself as an alternative to GPT models as it is designed specifically for malicious activity.[4]
WormGPT uses an AI module based on GPTJ and hosts a range of features including unlimited character support, chat memory retention, and code formatting capabilities.[4] WormGPT was trained on a diverse array of data sources particularly focused on malware-related data. [4] As seen in the above image, WormGPT is an alternative to ChatGPT that allows you “to do all sorts of illegal stuff and easily sell it online in the future” while simultaneously providing anonymity to all users.
In the above-conducted experiment, WormGPT was instructed to generate an email to pressure an unsuspecting account manager into paying a fraudulent invoice.[4] As seen above, WormGPT produced a decent email showcasing its potential for use in phishing and BEC attacks.[4] While other factors, such as the sender’s email address and such must also be taken into account, the content of the email produced by WormGPT is helpful and quite convincing to cyber attackers.[4]
On the other hand, when ChatGPT performs the same task, an error message with redirection towards how to best protect against threats is provided.
Additionally, there is an increasingly unsettling trend among cybercriminals on forums offering jail breaks for interfaces such as ChatGPT. [4] Recommendations within these forums are carefully crafted to manipulate generative AI interfaces to generate output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code.[4]
How can we mitigate such issues?
Andy Ward, the VP International of Absolute Security emphasizes the need for organizations to focus on threat protection, deterring attacks, and preparing to defend against cyber threats using AI against AI. [1,5] A few actionable steps outlined below could be taken to address such issues:
- Adopting established frameworks such as Google’s Secure AI framework and NIST AI risk Management Framework to protect companies’ Large Language Models. Such frameworks will help prevent attackers from exploiting prompt injection to manipulate AI into revealing sensitive data or performing unauthorized actions.[2]
- Implementing strategies to establish identities for customers and employees. In today’s world where voices can be cloned within seconds, this poses a significant challenge for remote identity verification, particularly in distributed workspaces.[2]
- Adopting a proactive approach within organizations by leveraging AI for defense to effectively prevent AI-driven attacks. Given that AI systems utilize a network of trained computers to identify and prevent malicious activities on networks and are built to autonomously recognize threats and detect vulnerabilities faster than human teams, ensuring businesses are fully equipped with AI capabilities is crucial to reducing the chances of security breaches and unauthorized access to an organization’s data.[5] Using AI, cybersecurity teams can create a continuous feedback loop of simulated attacks and responsive remediation strategies.[2]
The shortage of cyber professionals leaves security teams understaffed and burned out. Currently, around 71% of organizations have unfilled cybersecurity positions.[2] Focusing on upskilling cyber teams will help address such issues in AI-based defense strategies, enhancing overall security.[2] Furthermore, supporting employees with resources to stay informed and filling knowledge gaps will make for a more engaged and better-equipped team ready to defend, especially for freshers such as ourselves!
References
[3] https://www.securityweek.com/ai-models-in-cybersecurity-from-misuse-to-abuse
Awesome post! AI is definitely evolving and has its pros and cons. I agree that the use of AI increases efficiency; however, everything introduces risks one way or the other. The recommendations for using the NIST AI Risk Management framework is great because a lot of organizations are currently trying to mature their AI governance processes with how widespread it has become. Understanding the balance between leverage the benefits of AI as well as managing the risks is highly crucial for success.
Good post, Keerthana; it is comprehensive enough with a meaningful insight of AI ‘s double-edged role in cybersecurity. while highlighting the benefits of advancement of AI’s technology, the post also mentioned the risk involves .
This post underlines the critical need for organizations to place equal importance on implementing strong security protocols as they do on embracing innovative technologies.
Artificial intelligence is a great invention; however, it has some bad parts too. I think to control this kind of AI-based cyberattack, the AI providers must set some layer of filtering where the user’s request will be filtered. Sensitive requests must be erased with an alert prompt when any user asks for them. Another step can be registering on the AI sites. For example, if any user wants to enter the AI sites, they have to register with an email or phone number so that the person who is using it will be there.
Great articulated post, Keerthana. It is impressive seeing how cyber-attacks have evolved rapidly even in AI just within a couple of years since AI emerged. The AI generative prompt injection is quite similar to SQL injection, however no firewall or IPS can detect it. The AI attacks now demand a new defense strategy. On the same note, A recent AI generative incident was reported in 2023 where a prompt injection enabled an attacker to have a binding contract with Chevrolet, buying Tahoe for $1 (Lewin, 2023)[1]. Many businesses rushed to AI saving on call center agents without knowing the AI cyber-attacks.
[1] Lewin (December 18, 2023). Chevy Dealer’s AI Chatbot Allegedly Sold A New Tahoe For $1, Recommended Fords. Retrieved from: theautopian.com/chevy-dealers-ai-chatbot-allegedly-recommended-fords-gave-free-access-to-chatgpt
You’re absolutely right, AI has significantly changed the landscape of cyberthreats and traditional defense mechanisms like firewalls and intrusion prevention systems which now have become ineffective methods against newer vulnerabilities. The 2023 incident where Chevy Dealers AI Chatbot allegedly sold New Tahoe for $1 is a perfect example of how quickly attackers are finding and exploiting such loopholes. It is concerning to see how businesses are rapidly adopting AI without fully understanding the security risks associated especially in areas where sensitive information such as credit card information and personal demographic information is frequently exchanged. Thank you for your comment Tamer!
Nice job Keerthana! I really enjoyed reading the article. I agree that AI has changed the rules on how the threats work, people used to need more knowledge on the fields to create an attack, but now any regular user with access to this web resource can become a threat to an organization or individual. In my opinion the lack of regulation in some AI tools provides more options to black hat hackers to expand their attack options. It will become harder and harder to protect assets with wider attack tools. New technology will require the use of AI to verify the security configuration as well, so the dependence on AI will increase. Thanks for the information!
Nice job Keerthana! I really enjoyed reading the article. I agree that AI has changed the rules on how the threats work, people used to need more knowledge on the fields to create an attack, but now any regular user with access to this web resource can become a threat to an organization or individual. In my opinion the lack of regulation in some AI tools provides more options to black hat hackers to expand their attack options. It will become harder and harder to protect assets with wider attack tools. New technology will require the use of AI to verify the security configuration as well, so the dependence on AI will increase. Thanks for the information!
Nice job Keerthana! I really enjoyed reading the article. I agree that AI has changed the rules on how the threats work, people used to need more knowledge on the fields to create an attack, but now any regular user with access to this web resource can become a threat to an organization or individual. In my opinion the lack of regulation in some AI tools provides more options to black hat hackers to expand their attack options. It will become harder and harder to protect assets with wider attack tools. New technology will require the use of AI to verify the security configuration as well, so the dependence on AI will increase. Thanks for the information!
Great Post Keerthana! It’s concerning to see the rapid rise in the misuse of AI in cybersecurity, especially with tools like WormGPT which enables cybercriminals to craft convincing phishing emails and malware. The fact that many CISOs are wary of AI highlights the risk AI presents when not managed carefully. What stands out is that, while AI offers defensive power, it’s equally potent in the hands of attackers. Establishing frameworks like the NIST AI Risk Management Framework as you mentioned and regular upskilling of cybersecurity professionals are some essential steps to mitigate these threats. Also, equipping security teams with AI tools to simulate attacks and strategize responses will be critical in staying a step ahead.
I completely agree that using AI tools to simulate attacks and simultaneously improve defensive strategies will be key for staying ahead in this ever-evolving field of technology. As you have highlighted, AI in cybersecurity is a double-edged sword. While Ai can strengthen many defenses, it can also empower attackers to execute more sophisticated and convincing attacks. As such, it is vital to upskill cybersecurity professionals to incorporate AI-based tools to strengthen their organization defenses and also ensure they’re prepared for AI-driven threats. Thank you for your comment Michael!
Really detailed post, Keerthana. I think AI is a huge topic and issue not just in cybersecurity but in all parts of life. Just look at our class for example where our Prof must tell us not to make these blog posts and comments using AI. With respect to cybersecurity, I really like the recommendations at the end. I think companies need to cautiously use and integrate AI into there processes because it a powerful tool that has huge benefits. But it’s a complicated tool and shouldn’t just be used without thought. That’s why I like the recommendation to adopted one of the AI frameworks, and to use AI for defense for fast detection and response. A big data privacy issue that most regular employees might not realise is when using ChatGPT and such they might be leaking client and customer sensitive data through theses AI models. Which first is not good for the clients and customers but second for the company could results in fines from breaking privacy laws.
This is a great post, it hits the nail on the head regarding the need to address AI threats. Understanding how malicious actors misuse AI, like in the case of WormGPT, is essential for us to develop effective defenses. Education and proactive strategies are essential for equipping cybersecurity teams to handle these evolving challenges.