Microsoft and OpenAI have released a report that looks into how hackers are using large language models (LLMs) like ChatGPT to refine and improve their cyberattacks.
The research claims to have detected state-backed threat actors attempting to enhance their attack methods through tasks such as data selection, file manipulation, and multiprocessing.
The report follows last month’s news that Microsoft’s systems had been breached by a Russian-backed hacking group, compromising the email accounts of its senior leadership team.
What Has the Report Found?
The report from Microsoft, a collaboration with its commercial partner OpenAI, set out to research how generative AI technologies can be used safely and responsibly.
In it, Microsoft revealed how a number of adversaries are implementing AI functionality in their tactics, techniques, and procedures. These were determined to have come from Russia, North Korea, Iran, and China.
One example of a cyberattack referenced in the report came from the Strontium group, which is linked to Russian military intelligence and considered a “highly effective threat actor.” The report found that, with AI assistance, the group was able to undertake LLM-informed reconnaissance and LLM-enhanced scripting techniques.
Put simply, LLM-informed reconnaissance means using generative AI to understand satellite communication protocols and radar imaging tools, giving the group deeper knowledge of, and insights into, potential targets.
Similarly, LLM-enhanced scripting techniques involve using AI models to create code snippets that can then perform functions during an attack.
Such use of LLMs follows a worrying broader trend of cybercriminals using generative AI to produce code that disables antivirus systems and deletes directory files, so that the breach is less likely to be flagged as an anomaly.
The Sectors at Risk
Several threat actors were named in the research, with groups such as Strontium, Charcoal Typhoon, and Salmon Typhoon referenced as targeting a variety of sectors. These include:
- Defense
- Energy
- Government
- Non-governmental organizations (NGOs)
- Oil and gas
- Technology
- Transportation and logistics
Interestingly, Salmon Typhoon — also known as Sodium — was already known to have a history of launching attacks against the US defense sector.
Specific regions were also noted as being a target for different groups, with Charcoal Typhoon — also known as Chromium — focusing on organizations in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal.
How Microsoft Is Fighting Back
According to the report, the most obvious way to respond to AI-assisted attacks is to fight back with AI.
“AI can help attackers bring more sophistication to their attacks, and they have resources to throw at it. We’ve seen this with the 300+ threat actors Microsoft tracks, and we use AI to protect, detect, and respond.” – Homa Hayatyfar, principal detection analytics manager for Microsoft
Under this strategy, Microsoft is building a new GPT-4-powered AI assistant, Security Copilot. The tool is being designed to detect cyberthreats and security risks before they cause harm, by summarizing vast quantities of data signals, and then to reinforce the appropriate security defenses accordingly.
The technology giant is also in the process of overhauling security standards for its legacy systems, following last month’s breach of executives’ emails, as well as attacks on its Azure cloud platform.