
ThreatModeler Dice Feature: ChatGPT Raises Cybersecurity and A.I. Concerns

  • 20 January 2023

Since its release, ChatGPT, a chatbot capable of producing incredibly human-like text thanks to a sophisticated machine-learning model, has industry observers heralding a new stage in artificial intelligence (A.I.) development. ChatGPT’s ability to produce realistic conversations and messages—and adapt to its mistakes—could have applications in industries ranging from finance to art.  

 

In the two months since OpenAI announced ChatGPT, the chatbot has generated significant headlines and buzz, with as many as a million people testing out the tech soon after its release. Microsoft has made significant investments in the platform, with an eye toward possibly integrating it into its cloud services.

 

The excitement over ChatGPT, however, comes with a dark side, including concerns over security and the possibility of cybercriminals turning the chatbot to their own ends. Check Point Research has documented several instances of threat actors deploying far more sophisticated phishing emails written with the help of the chatbot. Other threat actors are using the technology to create malware.

 

Whether these cybercriminal experiments will succeed, however, remains to be seen. “It’s still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web. However, the cybercriminal community has already shown significant interest and is jumping into this latest trend to generate malicious code,” the Check Point researchers note in their most recent report.

 

For cybersecurity experts, ChatGPT shows how A.I. is moving toward mainstream use. The increasing use of these technologies means tech pros must brush up on their skills at the intersection of A.I. and cybersecurity, both to develop safer code as new applications are created and to counter what threat actors are deploying with these same tools.

 

“Many organizations are not prepared for how this is going to change the threat landscape. You have to fight A.I. with A.I. and organizations can look for cloud security that also uses generative A.I. and A.I. augmentation technology,” Patrick Harr, CEO at security firm SlashNext, recently told Dice. “Using these technologies to predict millions of new variants of the threats that might enter the organization is the only way to counteract these attacks to close the security gap and vulnerabilities created by this dangerous trend.”

 

A.I. Brings Unique Security Risks

 

Even before the release of ChatGPT, advancements in A.I. were already disrupting the cybersecurity industry. With information security spending estimated to reach $187 billion this year, Gartner expects CISOs and other security leaders to spend more on A.I. technologies to protect against attacks that also utilize the technology.

 

At the same time, the research firm predicts that “all personnel hired for A.I. development and training work will have to demonstrate expertise in responsible A.I.”

 

These developments mean tech pros not only have to understand A.I. and how it’s used within an organization, but also how tools such as ChatGPT can be quickly adapted by attackers, said Mike Parkin, a senior technical engineer at Vulcan Cyber.

 

“If someone creates a new and useful tool, someone else will find a way to creatively abuse it. That’s what we’re seeing with ChatGPT now,” Parkin told Dice. “You can ask it to create some code that will perform a specific function, like export files and then encrypt them. You can ask it to obfuscate that code, and then give it to you as an embedded Excel macro. And it will.”

 

The more immediate threat, especially when it comes to A.I. technology like ChatGPT, is not that threat actors will use it to immediately create sophisticated malicious code, but that they will deploy the chatbot to improve phishing emails, making these malicious messages more realistic and enticing for potential targets to open.

 

“Compared to your typical scammer’s writing, ChatGPT is Shakespeare,” Parkin added. “This is where I see a genuine threat in this technology. Truly innovative obscured code is something of an art, and ChatGPT’s not there yet. But for conversational situations, it’s already writing at a level above and beyond what a lot of threat actors are doing now.”

 

In its assessment of ChatGPT, security firm Tanium’s Threat Intelligence Team came to a similar conclusion: “The primary takeaway of ChatGPT isn’t its destructive nature, but rather the development of interfaces by cybercriminals to aid unskilled hackers in creating highly sophisticated campaigns of various types: text scams, phishing lures, [business email compromise] attacks, etc.”

 

Developing Skills to Respond

 

The increasing use of A.I. to create phishing emails demonstrates that tech pros not only need to brush up on their security skills, but must also ensure that their organizations are training employees to spot the telltale signs of these types of attacks.

 

Cybercriminals excel at using technology that takes advantage of the skills gaps of their targets, leaving those organizations much more vulnerable, noted Zane Bond, Head of Product at Keeper Security.

 

“The more realistic threat from these artificial intelligence tools is the opportunity for bad actors with limited resources or technical knowledge to attempt more of these attacks at scale,” Bond told Dice. “Not only can the tools help bad actors create content such as a believable phishing email or malicious code for a ransomware attack, but they can do so quickly and easily. The least-defended organizations will be more vulnerable as the volume of attacks will likely continue to increase.”

 

Organizations that have tech pros who understand A.I. and how to use it to automate the response to these attacks will have an advantage. This means opportunities for those professionals who understand the intersection of A.I. and cybersecurity.

 

“Vulnerability discovery, particularly malware or insider threat detection, has always been a cat-and-mouse game with adversaries because detective tools typically rely on signatures,” John Steven, CTO at ThreatModeler, told Dice. “As signatures become effective, adversaries adjust. Generative A.I. won’t change this equation, as it’s a weapon both sides can use within their current workflows. Adversaries will use these tools to create malware that evades detection. Defenders will use it to detect ‘conceptual’ signatures and their generated variants.”
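
Steven’s point can be illustrated with a toy example. The following sketch (in Python, with hypothetical signature names and byte patterns invented purely for illustration, not drawn from any real product or threat feed) shows why purely signature-based detection becomes a cat-and-mouse game: a payload containing a known pattern is flagged, while a trivially modified variant slips past the same signatures.

```python
# Minimal sketch of naive signature-based detection (illustrative only;
# the signature names and byte patterns below are hypothetical placeholders).

SIGNATURES = {
    "demo-family-a": b"EVIL_MARKER_V1",
    "demo-family-b": b"\x90\x90\x90\x90\xcc",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures whose byte pattern appears in data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A payload containing a known pattern is flagged...
print(scan(b"header EVIL_MARKER_V1 trailer"))   # ['demo-family-a']

# ...while a trivially altered variant of the same payload slips past,
# which is why defenders end up chasing every new variant.
print(scan(b"header EVIL_MARKER_V2 trailer"))   # []
```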

