
How AI is shifting the paradigm to give more power to threat actors


In today's fast-paced world, attackers are actively leveraging emerging technologies to outsmart defense teams and identify new points of vulnerability. That's why it's no surprise that artificial intelligence (AI) has become a significant player in the ever-evolving threat landscape. 

With AI software appearing on people’s desktops at lightning speed thanks to ChatGPT, awareness is being heightened on both sides of the cyber landscape. While AI technologies can potentially make the lives of security teams easier and help greatly in fields such as threat analysis and automation, they also give way to a new breed of malware and other attacks. 

Gone are the days of setting up intricate remote infrastructures or purchasing overpriced Malware-as-a-Service (MaaS) tools to expedite the process of coordinating and implementing attacks – AI attack mechanisms are here to stay.  


AI-driven malware is taking a seat at the table 

Using AI mechanisms, attackers have learned to rapidly execute known and highly detectable malicious attack techniques, allowing the malware to bypass many modern security defenses. When these techniques are combined in unexpected ways, security platforms fail to recognize the malicious threat pattern. With AI behind the wheel, the problem is exacerbated – how do you predict what an AI-driven malware might do next? 


Recent events attest to the advancement of these malware types in the public sphere. Although such technologies were put on the map in 2018, when a Proof of Concept (PoC) AI-powered malware named DeepLocker was showcased at a conference, 2023 has seen a significant increase in AI technologies being deployed for malicious intent. 

Earlier this year, researchers observed a new campaign: an infostealer that leverages growing public interest in AI to lure unsuspecting victims into downloading a malicious Google Chrome extension. The malicious extension was then used to deploy malware that siphoned access credentials for Facebook business accounts. This campaign, coupled with other AI-themed phishing campaigns, is evidence of attackers’ ability to leverage AI indirectly. 

BlackMamba: Exploring the potential impact of an AI attack 

BlackMamba, an AI-related Proof of Concept (PoC) created by a security team to explore and illustrate the potential impact of generative AI, was first shared with the public in March 2023. BlackMamba seeks to illustrate just how devastating things can get once attackers begin to incorporate AI into their attacks. 

Here are some of BlackMamba’s key tactics: 

  • Natural Language Processing (NLP): By utilizing NLP models, the team behind BlackMamba managed to make command-and-control (C2) servers obsolete. 

  • Automation: The BlackMamba experts found that the malware could be equipped with intelligent automation capable of sending attacker-bound data back through a benign communication channel. 

  • Generative AI: BlackMamba employed generative AI code-synthesis techniques that could produce new malware variants, changing the code so that it evades detection algorithms.



AI-powered malware doesn’t exist in the wild – yet  

Some attackers might find AI to be better suited not as the primary engine for their malware, but rather as a tool to improve an existing strain. This type of innovative thinking could well be game-changing, as TTPs – the Tactics, Techniques, and Procedures used by specific attackers – would no longer be a reliable parameter for defending teams. Furthermore, incorporating AI into attacks could allow them to be executed faster than ever. 

That being said, AI-powered malware has yet to be observed in the wild, giving the cybersecurity sphere some breathing room until it eventually makes its grand entrance. It is important to note, however, that malware that has not yet been observed may very well already exist in the wild. 


AI-powered malware differs from the strains we’re familiar with in that it is trained to adapt itself to the scenario at hand and target its victims more precisely, making it much harder to detect. 

Building up cyber resilience  

While attackers haven’t quite expanded into AI territory, the groundwork is slowly but surely being laid to accommodate those who seek to venture there in the near future. Once such technology is incorporated, malware evasion capabilities will see a significant boost, as C2 infrastructure is made obsolete and the malware’s behavior becomes too unpredictable for current defense systems to handle. 


It is yet to be determined whether this emerging technology will have the power to shift the paradigm and allow attackers to succeed while defense teams struggle to make good use of it. To ensure that your organization is well protected, enriching your security teams with knowledge and awareness of such attacks is key to establishing a foundation of safety in this largely uncharted territory. 

To learn more about how CyberProof’s experts are integrating AI into cybersecurity defenses, contact us.


