AI Appears to Be Making Malware Hackers' Work Easier
By Anthony Burr | TH3FUS3 Managing Editor
October 4, 2024 08:39 AM
Reading time: 2 minutes, 8 seconds
TL;DR Malware developers are leveraging generative AI to expedite the creation of malicious code, fueling a rise in cyberattacks. HP's Wolf Security team uncovered an AI-influenced version of AsyncRAT, highlighting a worrying trend. The development reflects how AI is reshaping the cybersecurity landscape.
AI: A Double-Edged Sword in Cybersecurity
Malware developers are now harnessing generative AI to write code faster, resulting in a surge of cyberattacks.
This technological shift is lowering the barrier to entry, allowing almost anyone with basic technical skills to create malware.
In a September report, HP's Wolf Security team detailed their discovery of a new variant of the asynchronous remote access trojan (AsyncRAT), malware capable of remotely controlling a victim's computer.
The AsyncRAT Discovery
While AsyncRAT was originally written by humans, this new version showed signs of AI involvement: its injection method appeared to have been developed using generative AI.
Historically, researchers have seen generative AI used to craft phishing lures and deceptive websites designed to scam victims.
However, the report noted that prior to this discovery, there was limited evidence of attackers actively using AI to write malicious code.
Indicators of AI-Created Code
The program exhibited several characteristics indicative of AI-generated code. Notably, nearly every function in the script was accompanied by a comment explaining its operation.
This level of documentation is unusual for cybercriminals, who generally prefer to keep their code opaque to outsiders. Furthermore, the code's structure and its choice of function and variable names strongly suggested AI involvement.
Unraveling the Malicious Script
The investigation began when a suspicious email sent to a subscriber of HP's Sure Click threat containment software was flagged. Masquerading as a French invoice, it likely targeted French speakers.
The team was initially unable to determine the file's purpose because its code was encrypted. Eventually, however, they cracked the password, revealing the malware within.
The file contained a Visual Basic Script (VBScript) that modified the victim's Windows registry, installed a JavaScript file, and executed it. This chain of actions ultimately led to the installation of the AsyncRAT malware.
"The activity shows how GenAI [generative AI] is accelerating attacks and lowering the bar for cybercriminals to infect endpoints," the HP report stated.
The Implications of AI in Cyber Threats
AsyncRAT, released on GitHub in 2019, is marketed as a legitimate open-source tool. However, cybercriminals predominantly use it because it allows them to remotely control infected computers, posing significant risks, particularly to crypto users.
This new AI-assisted variant demonstrates that generative AI is making it easier for malware developers to execute attacks, and cybersecurity experts are grappling with the implications as the technology continues to evolve.
Previous incidents have shown that AI can uncover vulnerabilities in smart contracts, which both ethical and unethical hackers can exploit.
Moreover, reports from companies like Meta warn of fake generative AI programs being used as lures, further complicating the security landscape.