Nation-State Hackers Use AI to Boost Cyber Operations
Nation-state hackers are using artificial intelligence to refine their cyberattacks, according to a report published by Microsoft Corp. on Wednesday.
Russian, North Korean, Iranian and Chinese-backed adversaries were detected adding large language models, like OpenAI’s ChatGPT, to their toolkit, often in the preliminary stages of their hacking operations, researchers found. Some groups were using the technology to improve their phishing emails, gather information on vulnerabilities and troubleshoot their own technical issues, according to the findings.
It’s the strongest indication yet that state-sponsored cyber-espionage groups, which have haunted businesses and governments for years, are improving their tactics with publicly available technologies, like large language models. Security experts have warned that such an evolution would help hackers gather more intelligence, boost their credibility when trying to dupe targets and more rapidly breach victim networks. OpenAI said Wednesday it had terminated accounts associated with state-sponsored hackers.
“Threat actors, like defenders, are looking at AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could advance their objectives and attack techniques,” Microsoft said in the report.
No significant attacks using LLM technology have been observed so far, according to the company. As early as January 2023, policy researchers warned that hackers and other bad actors online would find ways to misuse emerging AI technology, including to help write malicious code or spread influence operations.
Microsoft has invested $13 billion in OpenAI, the buzzy startup behind ChatGPT.
Hacking groups that have used AI in their cyber operations include Forest Blizzard, which Microsoft says is linked to the Russian government. North Korea’s Velvet Chollima group, which has impersonated non-governmental organizations to spy on victims, and China’s Charcoal Typhoon hackers, who focus primarily on Taiwan and Thailand, have also used such technology, Microsoft said. An Iranian group linked to the country’s Islamic Revolutionary Guard has used LLMs to create deceptive emails, one used to lure prominent feminists and another that masqueraded as an international development agency.
Microsoft’s findings come amid growing concerns from experts and the public about the grave risks AI could pose to the world, including disinformation and job loss. In March 2023, more than a thousand people, including prominent leaders of major tech companies, signed an open letter warning about the risks AI could pose to society; the letter has since drawn more than 33,000 signatures.
Another suspected Russian hacking group, known as Midnight Blizzard, previously compromised emails from Microsoft executives and members of the cybersecurity staff, the company said in January.
(Updated to include additional context throughout. A previous version of this story was updated to correct a reference in seventh paragraph to Forest Blizzard group.)
Top photo: Russian Cyrillic Keyboard. Moscow, Russia, March 10, 2019.