Spotlight: Cybercrime Innovation

Fighting AI with AI

Easy to use and hard to detect, artificial intelligence is one of cybercriminals’ favourite new tools. But law enforcement also has AI in its toolkit and is increasingly using it to fight back.

With its ability to create deepfakes, speak and write fluently in multiple languages, replicate graphics or generate personalized information, AI is taking phishing to a new level. As well as creating highly realistic scam materials, it allows fraudsters to use polymorphic phishing, a technique that bypasses IT defence systems by rapidly resending emails with slight variations until one scores a hit. AI also helps cybercriminals make their malware and data breach attacks faster and more efficient, and allows them and other bad actors to contaminate the models used to train legitimate AIs with malicious content. For the criminal justice system, it raises further challenges, with its potential both to create fake evidence and to portray genuine evidence as fake.


Putting AI on the right side of the law

The threats are significant, but law enforcement agencies are developing their own weapons to fight back, including through AI itself. INTERPOL, along with the police forces of its 196 member countries and expert public and private sector partners, is leading that response worldwide. Our Innovation Centre partnered with UNICRI to produce the Responsible AI Innovation in Law Enforcement Toolkit, and we recently relaunched CyberEx, our international Cybercrime Expert Group, which brings together law enforcement agencies, academia and private sector cybersecurity specialists to explore how best to support our member countries on cybercrime.

“The threats posed by AI are a key topic for CyberEx, but we are also focusing on the opportunities it creates for law enforcement,” says Pei Ling Li, Head of Cyber Strategy & Outreach, INTERPOL Cybercrime Directorate. “As long as we work within a strict legal framework, there is nothing to stop law enforcement and other cyber defenders from using AI in similar – but legal – ways to the criminals,” she continues, “whether it’s to counter them by enhancing our investigational efficiency or luring them in with our own social engineering techniques”.

A Rapid response in Hong Kong

The Hong Kong Police Force (HKPF) is one of the law enforcement agencies doing just that, through Project Rapid, an initiative that uses AI to identify and take down phishing sites. “Previously we were only able to identify the criminal infrastructures behind phishing campaigns by following up on individual police reports,” says Horest Au Yeung, Hong Kong Police Force cybercrime officer and former Asia & South Pacific Desk Coordinator at INTERPOL’s Cybercrime Directorate, “but now we take a proactive approach, using AI to identify and analyze suspicious websites. Even if the public provides us with just a single URL, AI allows us to dig into the internet much further and much faster to verify if it is scam-related. We can search instantly to see if there are similar phishing sites out there, using lookalike visuals, hosted on the same server or created on similar dates, for example,” he continues. “We can then quickly report them to the internet service providers concerned and ask them to take down the sites, and that means we can reduce the risk to the public.”
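
In broad strokes, the kind of pivot Au Yeung describes, from a single reported URL to a cluster of related sites, can be sketched in a few lines of code. The example below is purely illustrative and is not HKPF’s or INTERPOL’s actual tooling: it assumes a feed of candidate domains is already available, it checks only two of the signals mentioned (shared hosting and lookalike domain names), and all names and thresholds are hypothetical.

```python
# Illustrative sketch only: not HKPF's or INTERPOL's actual tooling.
# Assumes a feed of candidate domains (e.g. newly observed domains) is
# already available; the goal is to pivot from one reported phishing URL
# to other sites that share its hosting or imitate its name.
import socket
from difflib import SequenceMatcher
from urllib.parse import urlparse


def resolve_ip(domain):
    """Best-effort DNS lookup; returns None if the domain does not resolve."""
    try:
        return socket.gethostbyname(domain)
    except OSError:
        return None


def find_related_sites(reported_url, candidate_domains, name_threshold=0.8):
    """Flag candidates that share the reported site's hosting IP or closely
    resemble its domain name (a simple lookalike check)."""
    reported_domain = urlparse(reported_url).hostname or ""
    reported_ip = resolve_ip(reported_domain)
    related = []
    for domain in candidate_domains:
        same_host = reported_ip is not None and resolve_ip(domain) == reported_ip
        name_similarity = SequenceMatcher(None, reported_domain, domain).ratio()
        if same_host or name_similarity >= name_threshold:
            related.append({
                "domain": domain,
                "same_hosting_ip": same_host,
                "name_similarity": round(name_similarity, 2),
            })
    return related


if __name__ == "__main__":
    # Hypothetical example: one public report expanded into a takedown shortlist.
    hits = find_related_sites(
        "https://examp1e-bank-login.com/verify",                # single reported URL
        ["example-bank-login.net", "secure-examp1e-bank.org"],  # candidate feed
    )
    for hit in hits:
        print(hit)
```

A production system would of course go much further, for example comparing rendered page visuals and domain registration dates as described above, before confirmed matches are reported to the internet service providers concerned for takedown.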


Project Rapid is now set to broaden its impact across borders through Operation Rapid Strike. In its first iteration, launched in April, the Hong Kong Police Force will send data retrieved during Project Rapid to INTERPOL’s global Cybercrime Intelligence Unit for analysis. There it will be turned into actionable intelligence and compiled into Cyber Activity Reports for any member countries where similar threats have been identified or may be emerging. This new AI-driven investigative approach may then be extended to other regions.


New tools, same crimes

“We need to work closely with our member countries to track AI threats and adapt our investigation and evidence-gathering techniques, but not become blinded by the technology,” says Rose Bernard, Coordinator, Cybercrime Operations, INTERPOL Cybercrime Directorate. “We need to recognize that, operationally speaking, the crimes are not changing – the criminals are just using a new tool and in five years’ time they will be using another one. Some say battling cybercrime is a losing game for law enforcement, but I’m more optimistic,” she concludes. “We may not always win in the AI game but, along with our partners and our member countries, we can take action to make the criminals lose.”