

Marijan Hassan - Tech Journalist

How OpenAI is using user prompts to identify and deter bad actors


In a new report analyzing emerging trends in AI and cybersecurity, OpenAI revealed that bad actors have been using ChatGPT prompts to research vulnerabilities and design malicious campaigns. However, the AI company has also been leveraging those same prompts to gain crucial insight into the systems being targeted and the tools cybercriminals are testing.



OpenAI said it had disrupted 20 covert influence campaigns and networks that sought to use AI to spread discord or compromise systems. "These cases allow us to begin identifying the most common ways in which threat actors use AI to increase their efficiency or productivity," OpenAI explained.


Case 1: SweetSpecter’s attack

One case detailed in the report involved a suspected China-based adversary known as SweetSpecter, which launched a spear-phishing campaign targeting OpenAI and various government entities.


The group posed as a ChatGPT user seeking help with platform issues and then attached a malware-laden file to their emails. If opened, the attachment would have deployed malware called SugarGh0st RAT, which could give SweetSpecter control of the target machine, allowing it to execute commands, capture screenshots, and exfiltrate data.


Fortunately, OpenAI’s spam filter caught the malicious emails before they reached employees. OpenAI says it traced the attack back to SweetSpecter’s ChatGPT prompts, which included requests for:


Themes that government department employees would find interesting

Good names for attachments to avoid being blocked


SweetSpecter also asked ChatGPT about "vulnerabilities" in various apps and "for help finding ways to exploit infrastructure belonging to a prominent car manufacturer," OpenAI said.


Case 2: CyberAv3ngers

Another notable case involved CyberAv3ngers, a group suspected to be linked to the Iranian armed forces and known for its attacks on critical infrastructure in the U.S., Israel, and Ireland. By monitoring the group’s ChatGPT activity, OpenAI was able to identify additional technologies and software that CyberAv3ngers might exploit in future attacks, including vulnerabilities in water, energy, and manufacturing systems.


OpenAI’s efforts also uncovered new activity from an Iranian threat actor group, STORM-0817. The group appeared to be using AI tools for the first time to enhance its reconnaissance and exploit-development capabilities.


One of their ChatGPT prompts, for example, sought help with debugging code designed to scrape Instagram profiles, which OpenAI confirmed was being tested on an Iranian journalist critical of the government. By tracking these prompts, OpenAI was able to identify and disrupt STORM-0817’s efforts before they became fully operational.


AI and cybersecurity in a new era

OpenAI’s report shines a light on the double-edged nature of AI in cybersecurity. While bad actors are using AI tools like ChatGPT to enhance their attacks, the same tools are providing unprecedented visibility into their tactics and strategies. This visibility has allowed OpenAI to proactively disrupt cyber campaigns and alert the relevant authorities before threats fully materialize, an approach that could serve as an example of how AI companies can mitigate the risks posed by cybercriminals leveraging these powerful tools.


