CyberCX released their 2026 Threat Report on 3 March. Buried in a report mostly about ransomware and cyber extortion is a finding that matters.
Key Takeaways
- CyberCX's forensics team responded to AI data spill incidents for the first time in 2025
- Affected organisations had no DLP controls, no enterprise AI licensing, and no network logging - they could not even quantify what was shared
- EU AI Act compliance deadline is 2 August 2026. Australia's Privacy Act disclosure requirements start 10 December 2026
- Small businesses (11-50 employees) have the highest shadow AI risk at 27% unsanctioned tool usage
Their digital forensics and incident response (DFIR) team was called in for AI data spill incidents for the first time: staff uploading sensitive corporate data to external AI platforms. These are real incidents with real forensics engagements.
Hamish Krebs, their Global Executive Director of Digital Forensics and Incident Response, put it plainly: “2025 was the first year that CyberCX’s DFIR team was engaged for these types of AI data spill incidents, and likely reflects the continued surge in AI adoption by many organisations and individuals.”
The report goes further. In many of these incidents, the organisation had no enterprise licensing for the AI platform, no data loss prevention (DLP) controls, and no adequate network logging. That made it impossible to identify or quantify the data spillage.
Read that again. Not just “data was shared.” The organisations could not even work out what was shared or how much.
This is CyberCX, now part of Accenture, with 1,400 cyber security professionals across Australia and New Zealand. When they say they responded to shadow AI incidents, that is not marketing. That is their incident response team writing it up in a forensics report. And their recommendation? Continuous monitoring and clear data governance frameworks.
From email to forensics
Last week, Deloitte told 470,000 staff to stop putting confidential data into ChatGPT. They have a partnership with Anthropic, approved internal tools, and training programs. Staff are still using public AI because they reckon it works better. Deloitte’s fix was an email.
This week, CyberCX’s forensics team confirmed they got called in to clean up after it happened.
That is the escalation. Deloitte is “we caught staff doing it.” CyberCX is “we got called in after it caused a data spill.”
Until now, the shadow AI conversation has been about risk in the abstract: statistics, survey data, and projections.
The DTEX/Ponemon 2026 Insider Risk Report puts the cost of insider incidents at US $19.5 million per year, up 20% in two years, with shadow AI named as the key driver. Netskope’s 2026 data shows 223 AI-related data incidents per month per organisation. IBM says shadow AI breaches cost US $670,000 more than standard incidents.
Those are big numbers, but they are still abstract for most SMEs. “Data incidents per month” does not hit the same way as “we had to call in a forensics team because someone pasted the wrong thing into ChatGPT.”
That changes things, and the report even flags it under its own heading: “the more immediate AI risk might be internal.” Shadow AI is no longer a risk category in a survey; it is an incident type in a forensics report. Companies are paying forensics teams to clean up after employees used AI tools without anyone watching.
What the incident response gap looks like
Catching a problem before it happens and calling in forensics after it happens are two very different price tags.
The CyberCX report spells it out: the organisations that experienced these data spills had no enterprise AI licensing, no DLP controls, and no network logging. By the time anyone noticed, the data was already in a public AI platform. The forensics question became: what was shared, when, and with which platform? They could not answer it.
That is an expensive question to answer after the fact. CyberCX does not publish their incident response rates, but industry benchmarks for forensic investigations start in the tens of thousands and go up fast.
CyberCX’s own recommendation is continuous monitoring and clear data governance frameworks. That is exactly what visibility tools are built for.
Through Vireo Sentinel, we see the patterns that lead to these incidents every day: employees pasting client data, financial records, credentials, and proprietary code into ChatGPT, Claude, Gemini, and Perplexity. In one deployment of around 30 people over a few months, 38% of AI interactions contained something worth flagging; 295 high-risk prompts went in, and only 12 made it out the other side. Not because people were reckless. They just did not realise what they were sharing.
That is the gap. Organisations without visibility are flying blind until the forensics bill arrives. Organisations with visibility catch it at the point of entry and never need the call.
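What does “worth flagging” look like in practice? As a minimal sketch, and not a description of Vireo Sentinel’s actual detection logic, a point-of-entry scanner can run a handful of pattern rules over prompt text before it leaves the browser. Every rule and name below is an illustrative assumption; real tools use far richer detection than a few regexes.

```typescript
// Illustrative only: simple pattern rules of the kind a point-of-entry
// scanner might apply to prompt text. Not any product's real rule set.
type Rule = { name: string; pattern: RegExp };

const RULES: Rule[] = [
  // Long random-looking tokens with known prefixes often indicate API keys
  { name: "possible API key", pattern: /\b(?:sk|ghp|AKIA)[A-Za-z0-9_-]{16,}/ },
  // 13-16 digit runs with optional separators can be payment card numbers
  { name: "possible card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { name: "email address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  // Australian Tax File Number: nine digits, commonly grouped 3-3-3
  { name: "possible TFN", pattern: /\b\d{3}[ -]\d{3}[ -]\d{3}\b/ },
];

// Return the name of every rule the prompt trips; an empty array means clean.
function flagSensitive(prompt: string): string[] {
  return RULES.filter((r) => r.pattern.test(prompt)).map((r) => r.name);
}

// This prompt would be flagged before it ever reaches an AI platform:
console.log(
  flagSensitive("Summarise this: client john@example.com, card 4111 1111 1111 1111")
);
// -> ["possible card number", "email address"]
```

Rules this crude produce false positives (the card-number rule will also fire on long invoice numbers), which is why real visibility tools layer context on top. But even this much runs before the data leaves the machine, and that is the point.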
And compliance deadlines are tightening
The CyberCX report landed against a backdrop of tightening compliance deadlines in the EU and Australia.
The EU AI Act applies from 2 August 2026. High-risk AI systems (including recruitment, credit scoring, and customer service) must be fully compliant, with fines up to EUR 15 million or 3% of global turnover for non-compliance.
Australia’s Privacy Act now requires disclosure of automated decision-making, including AI-enabled systems, in privacy policies from 10 December 2026. Serious or repeated interference with privacy under the Act can trigger civil penalties of up to AUD 50 million, and a shadow AI data spill involving personal information could land squarely in that category.
An Okta survey of Australian security and tech leaders found 41% say nobody owns AI security risk in their organisation, and 35% named shadow AI as their top AI security blind spot.
If a regulator asks how your organisation governs AI tool usage, and your answer is “we do not know what our team uses,” that is a problem regardless of whether a data spill has actually occurred. The CyberCX report just made it harder to argue that shadow AI data spills are unlikely.
What this means for smaller businesses
Here’s the bit that should worry SMEs.
CyberCX works with mid-market and enterprise organisations. These are companies with security teams, budgets, and existing controls. If they are experiencing shadow AI data spills, what does that look like in a 20-person accounting firm, a 50-person engineering consultancy, or a 150-person agency?
The answer: the same problem, fewer resources to detect it, and less budget to call in forensics after the fact.
The DTEX/Ponemon research found that only 18% of organisations have properly built AI governance into their risk programs. And the Reco data shows small businesses with 11-50 employees have the highest shadow AI risk, with 27% of employees using unsanctioned AI tools.
The fix is straightforward: visibility first. Know which AI tools your team uses, what data they share, and how often. That does not require a six-figure security platform or a three-month deployment.
CyberCX recommends network, endpoint, and DLP controls layered together. That is solid advice if you have the team and budget. Most SMEs do not. A browser extension that catches sensitive data at the point of entry covers the biggest gap without a three-month infrastructure project.
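To make the point-of-entry idea concrete, here is a rough sketch of what a browser extension content script can do. Everything in it is a hypothetical illustration, from the single pattern rule to the confirm() speed bump; it is not how any particular product, Vireo Sentinel included, actually works.

```typescript
// content-script.ts: a minimal sketch of point-of-entry interception,
// assuming a Manifest V3 extension whose content script runs on AI chat pages.
const SENSITIVE = /\b(?:\d[ -]?){13,16}\b|[\w.+-]+@[\w-]+\.[\w.]+/;

document.addEventListener(
  "keydown",
  (e: KeyboardEvent) => {
    // Most AI chat UIs submit on Enter (Shift+Enter inserts a newline).
    if (e.key !== "Enter" || e.shiftKey) return;

    // Prompt boxes are usually a <textarea> or a contenteditable <div>.
    const el = e.target as HTMLElement;
    const text =
      el instanceof HTMLTextAreaElement
        ? el.value
        : el.isContentEditable
          ? el.textContent ?? ""
          : "";
    if (!text) return;

    if (SENSITIVE.test(text)) {
      // A real tool would log the event and apply policy; confirm() is
      // just the simplest possible speed bump at the point of entry.
      const sendAnyway = window.confirm(
        "This prompt looks like it contains sensitive data. Send anyway?"
      );
      if (!sendAnyway) {
        e.preventDefault();
        e.stopImmediatePropagation();
      }
    }
  },
  true // capture phase: run before the page's own submit handler
);
```

The design choice that matters is where the check runs: in the browser, before the text leaves the machine. That is exactly the gap network and endpoint controls tend to miss with SaaS AI tools, and it needs no infrastructure project to deploy.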
The question every business owner should ask this week
Deloitte sent an email. CyberCX sent a forensics team. Both were responding to the same problem: staff sharing sensitive data with public AI tools.
The question is not whether your team uses AI tools without oversight. The research says they almost certainly do. The question is whether you find out through a dashboard or through a forensics report.
Visibility at the point of entry is cheaper than incident response. It always has been.