The AI Security Balance: Criminal Exploitation vs. Defensive Innovation
TLDR
- Financial fraud losses driven by AI projected to hit $40 billion by 2027, up from $12.3 billion in 2023.
- Advanced AI tech is expected to trickle down from state actors to organised crime within 4–5 years.
- Defenders are countering with specialised AI like Google’s Sec-Gemini v1, improving threat intelligence analysis (+11%) and root cause identification (+10.5%).
- AI helps security teams close the attacker-defender asymmetry gap by working faster and analysing threats in near real-time.
As AI tools grow more capable and accessible, both attackers and defenders are finding new ways to harness their power. Recent research from The Alan Turing Institute on the criminal misuse of AI, paired with Google’s announcement of its latest defensive AI model, highlights the two sides of the AI security coin, revealing just how quickly things are shifting for both offence and defence.
How Criminals Are Leveraging AI
A new report, AI and Serious Online Crime, from the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), sheds light on how AI is transforming serious online crime. The goal? To provide an evidence-based understanding of how widespread AI adoption is reshaping criminal behaviour.
Here are some key takeaways on what’s accelerating criminal use of AI.
- Automation and scale – AI allows criminals to launch attacks at much greater volume through the automation of malicious activity.
- Enhancing existing crime models – GenAI is supercharging established criminal techniques across a wide range of use cases.
- Trickle-down of advanced capabilities – What starts with state actors often ends up in the hands of organised crime. The report predicts this tech transfer could take just 4–5 years.
- Exploiting human psychology – AI’s ability to produce convincing and manipulative content plays directly into our cognitive biases and vulnerabilities.
Several threat vectors from the report illustrate these dynamics:
- Financial fraud – AI is enabling more sophisticated scams, with global losses projected to hit $40 billion by 2027 — a sharp rise from $12.3 billion in 2023 (Deloitte projection cited in CETaS report).
- Deepfake-enabled scams – In one recent case, a Hong Kong employee was tricked into transferring £20 million after interacting with a highly realistic deepfake of their CFO that blended synthetic video and voice.
- LLM jailbreaking – Attackers are successfully bypassing safety measures in large language models, slashing malicious prompt rejection rates from 98% to just 2% in some tests cited by CETaS.
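To ground those rejection-rate figures, here is a minimal sketch of how a refusal rate might be measured over a batch of model responses. The `looks_like_refusal` heuristic, the marker list, and the sample data are illustrative assumptions, not the methodology behind the tests CETaS cites:

```python
# Minimal sketch: estimating an LLM's malicious-prompt rejection rate.
# The refusal heuristic and sample responses below are illustrative
# assumptions, not the methodology used in the tests CETaS cites.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")

def looks_like_refusal(response: str) -> bool:
    """Crude keyword check for whether a model declined the request."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def rejection_rate(responses: list[str]) -> float:
    """Fraction of responses that refuse, over a set of malicious prompts."""
    refused = sum(looks_like_refusal(r) for r in responses)
    return refused / len(responses)

if __name__ == "__main__":
    sample = [
        "I can't help with that request.",
        "Sure, here is how you would do it...",  # a successful jailbreak
    ]
    print(f"Rejection rate: {rejection_rate(sample):.0%}")  # prints 50%
```

Real evaluations use far larger prompt sets and more robust refusal classifiers, but a drop from 98% to 2% is the same basic ratio of refusals to attempts.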
Criminal market dynamics are also changing:
- Open-weight AI systems with fewer guardrails are increasingly being exploited, particularly with innovation accelerating in places like China.
- Interviews suggest some criminal groups are now training their own large language models, deliberately removing safety restrictions to suit their needs. This escalating offensive capability underscores the urgent need for equally sophisticated defensive measures.
Defensive AI: Fighting Back
Thankfully, while criminals are innovating, so are defenders. Google’s announcement of Sec-Gemini v1 - an experimental cybersecurity-focused AI model - marks a significant step forward in defensive capabilities.
Addressing the Core Challenge: Asymmetry
Cybersecurity has always struggled with asymmetry: defenders must block every threat, while attackers only need to find one weak spot. As Google puts it, “This fundamental asymmetry has made securing systems extremely difficult, time-consuming, and error-prone.”
AI is now helping to close that gap by:
- Force multiplying cybersecurity professionals
- Delivering near real-time threat intelligence with advanced reasoning
- Streamlining complex tasks like root cause analysis
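As a rough illustration of the force-multiplier idea, the sketch below shows the shape of an AI-assisted alert triage loop. The `ThreatModel` class is a hypothetical stand-in for a security-tuned model endpoint such as Sec-Gemini v1; its interface is an assumption for illustration, not a real API:

```python
# Minimal sketch of an AI-assisted triage loop. ThreatModel is a
# hypothetical stand-in for a security-tuned model endpoint; its
# interface is an assumption for illustration, not a real API.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    indicator: str  # e.g. a suspicious domain or file hash

class ThreatModel:
    """Placeholder for a model endpoint that enriches raw alerts."""
    def assess(self, alert: Alert) -> str:
        # A real model would reason over threat-intel context here.
        return f"{alert.indicator}: needs analyst review (stub verdict)"

def triage(alerts: list[Alert], model: ThreatModel) -> list[str]:
    """Enrich each alert so analysts start from a draft assessment."""
    return [model.assess(alert) for alert in alerts]

if __name__ == "__main__":
    queue = [Alert("edr", "badfile.exe"), Alert("proxy", "example-c2.test")]
    for verdict in triage(queue, ThreatModel()):
        print(verdict)
```

Even in a sketch like this, the model only drafts an assessment for a human to act on; keeping analysts in the loop is central to the force-multiplier framing.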
Real-World Improvements
Models like Sec-Gemini v1, which integrate sources such as Google Threat Intelligence (GTI) and the OSV open-source vulnerability database, are already showing tangible gains:
- 11% improvement in understanding threat intelligence (CTI-MCQ benchmark)
- 10.5% boost in identifying root causes of vulnerabilities (CTI-RCM benchmark)
- Better integration and contextualisation of diverse threat intelligence sources
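For a sense of how headline deltas like these are computed, here is a minimal sketch of scoring a multiple-choice benchmark in the style of CTI-MCQ. The question keys and model predictions are invented for illustration; the real benchmark's schema and scoring may differ:

```python
# Minimal sketch of scoring a multiple-choice threat-intel benchmark
# like CTI-MCQ. The answer keys and predictions are assumptions for
# illustration; the real benchmark's schema may differ.

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of questions where the model picked the keyed answer."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

if __name__ == "__main__":
    answers   = ["B", "C", "A", "D"]
    baseline  = ["B", "A", "A", "A"]  # hypothetical older model: 50%
    sec_model = ["B", "C", "A", "A"]  # hypothetical newer model: 75%
    gain = accuracy(sec_model, answers) - accuracy(baseline, answers)
    print(f"Accuracy gain: {gain:+.1%}")  # prints +25.0%
```

Whether a published delta is absolute (percentage points) or relative depends on how the benchmark is reported; the sketch above computes an absolute difference.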
Looking Ahead
As the CETaS report warns, “We are already seeing a major shift in AI-enabled criminality” - and it could escalate quickly without effective countermeasures. But on the flip side, defensive AI models like Sec-Gemini v1 show how the same innovations can be turned into powerful tools for good.
The arms race between offensive and defensive AI is well underway, creating a continuous cycle of innovation and response. Staying ahead will require constant investment, collaboration, and adaptation - not just in technology, but in the people and policies that shape how we use it.