ChatGPT potential for crime highlighted in cybersecurity warning

The Hague, Netherlands - ChatGPT and similar AI technologies, which have the potential to revolutionize how work is done across industries, may also be making life easier for cybercriminals, according to cybercrime experts in Europe.

ChatGPT and similar AI technologies have considerable potential to make life easier for cybercriminals, according to cybercrime experts in Europe.  © 123rf/terovesalainen

"ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes," the European Union law enforcement agency Europol is warning.

"The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors."

Phishing attacks, in which hackers often pose as a trusted online contact such as a bank or employer, rely on plausible text, like convincing email greetings and phrasing, to succeed.

Cybersecurity experts often warn users to be wary of grammar errors and other mistakes in emails, but ChatGPT's ability to deliver near-flawless text could make it even easier for scammers to trick people into handing over login details and other sensitive information.

And they're not the only ones worried about these developments, as an open letter signed by more than 1,000 big names in the tech industry shows.

"Grim outlook" for potential abuse of AI systems

Chatbots such as ChatGPT have a "dark side" ripe for criminal abuse, according to the EU law enforcement agency Europol.  © REUTERS

The potential exploitation of these types of AI systems by criminals provides a "grim outlook," Europol said, noting that chatbots such as ChatGPT have a "dark side" ripe for criminal abuse.

Europol said its Innovation Lab had organized workshops in response to the growing public attention given to ChatGPT to explore how criminals can abuse large language models (LLMs) and to assist investigators in their work.

The agency, based in The Hague, said it was focusing on three crime areas among many areas of concern: fraud and social engineering, disinformation, and cybercrime.

On disinformation, it said that ChatGPT "excels at producing authentic sounding text at speed and scale," making the model ideal for propaganda and disinformation purposes.

And on cybercrime, Europol noted that ChatGPT was capable of producing code in a number of programming languages. "For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code," it said.

Cover photo: 123rf/terovesalainen
