Microsoft claims Russia, China and others used OpenAI’s tools for hacking


State-sponsored hacking groups from Russia, China and other U.S. adversaries have been caught using OpenAI’s tools to better attack their targets, according to a report published Wednesday by Microsoft, amid concerns of possible cybersecurity threats as AI technology improves.
The OpenAI logo displayed on a smartphone screen in Athens, Greece, on January 22, 2024. (Photo illustration by Nikolas Kokovlis/NurPhoto via Getty Images)

Key Facts

OpenAI and Microsoft disabled accounts associated with the hacking groups Charcoal Typhoon, Salmon Typhoon, Crimson Sandstorm, Emerald Sleet and Forest Blizzard, according to reports from both companies.

The China-backed groups Charcoal Typhoon and Salmon Typhoon used OpenAI’s language models to improve their “technical operations,” Microsoft alleges, including researching cybersecurity tools and generating phishing content.

Forest Blizzard, a hacker group allegedly tied to Russia’s military intelligence, used language models to research “various satellite and radar technologies,” which “may pertain to conventional military operations in Ukraine,” Microsoft claims.

Hackers from North Korea associated with the Emerald Sleet group generated content that would “likely be for use in spear-phishing campaigns” against regional experts, while Crimson Sandstorm—allegedly tied to Iran’s Revolutionary Guard—used OpenAI’s tools to help write phishing emails, according to Microsoft’s report.

Liu Pengyu, spokesperson for China’s embassy in the U.S., told Reuters that China rejects “groundless smears and accusations” against the country, and that it supports the “safe, reliable and controllable” use of AI technology to “enhance the common well-being of all mankind.”

Both Microsoft and OpenAI said they would improve their approach to combating state-sponsored hacking groups using their tools, including investing in monitoring technology to identify threats, collaborating with other AI firms and being more transparent about possible safety issues linked to AI.

Crucial Quote

Tom Burt, who leads Microsoft’s cybersecurity efforts, told the New York Times the groups were using OpenAI’s tools for simple tasks: “They’re just using it like everyone else is, to try to be more productive in what they’re doing.”

Surprising Fact

Microsoft claimed last month the company’s corporate systems were attacked by the Russian-backed hacker group Midnight Blizzard. The group accessed a “very small percentage” of the company’s corporate email accounts, including some senior leadership and employees from its cybersecurity and legal teams, Microsoft said.

Key Background

Microsoft has released several reports over the last year about state-sponsored hacking efforts. Last year, Microsoft claimed a “China-based actor” breached the email accounts of about 25 U.S.-based government organizations. The company also said it uncovered infrastructure hacking activity by the Chinese hacker group Volt Typhoon, including attacks on U.S. military infrastructure in Guam.

Sami Khoury, Canada’s top cybersecurity official, told Reuters that evidence obtained by the Canadian government suggested more hackers were using AI to improve their attacks, develop malicious software and create more convincing phishing emails. Khoury’s warning followed a report by the European police organization Europol, which said tools similar to OpenAI’s ChatGPT made it possible “to impersonate an organization or individual in a highly realistic manner.” The U.K.’s National Cyber Security Centre also warned about the possible hacking risks of AI use, suggesting language models could “help with cyber attacks beyond their current capabilities.”

This article was first published on forbes.com and all figures are in USD.
