Nearly half of New Zealand businesses now say their biggest AI cyber risk is their own staff accidentally exposing data through AI tools. A quarter list improper AI use as a top cyber security challenge, up from 16% last year. Shadow AI is insider risk wearing a productivity hat, and directors across the Tasman are starting to treat it that way.
Key Takeaways
- 43% of NZ businesses say employees accidentally exposing data through AI is their biggest cyber risk, the top concern “by quite a margin”
- 24% now name improper AI use as a top cyber security challenge, up from 16% the previous year
- AI-related cyber incidents more than doubled from 6% to 14% in twelve months
- Many organisations rolling out sanctioned AI tools still lack sufficient security governance
The report
Kordia is a New Zealand state-owned enterprise that operates network infrastructure and, through its subsidiary Aura Information Security, provides cyber advisory and incident response services. Its annual New Zealand Business Cyber Security Report has run for a decade and surveys nearly 250 businesses with 50 or more employees. It carries weight with boards in the region.
The 2026 edition, published 9 March, shifts the conversation from external attackers to internal AI misuse.
In 2025, Kordia’s incident data showed AI-related vulnerabilities accounted for 6% of reported cyber-attacks. In 2026, that figure hit 14%, more than doubling in twelve months. At the same time, the share of businesses naming improper AI use as a top-three cyber challenge jumped from 16% to 24%.
The traditional external threat picture actually improved. The proportion of businesses reporting a cyber-attack dropped from 59% to 44%, broadly consistent with New Zealand’s National Cyber Security Centre data showing a decline from 7,122 incidents to 5,995. But within that smaller pool, the financial consequences got worse. Financial extortion rose from 14% to 19% of incidents. Among businesses that received a ransom demand, 42% paid it. And 32% of all businesses surveyed said they would consider paying.
The headline finding was none of those. It was about staff and AI.
The insider risk nobody planned for
Patrick Sharp, General Manager of Aura Information Security, frames it bluntly. Staff are “copying confidential data into AI systems, information they would never put into Google, without understanding the risks and without guidance from their organisation.”
When 43% of surveyed businesses say employees accidentally exposing data through AI tools or AI-driven processes is the biggest cyber risk facing their business, and when that answer outranks every other category “by quite a margin” according to Kordia, shadow AI has moved from an IT irritation to a board-level governance problem.
Sharp describes shadow AI as “the unauthorised use of AI tools by employees” and calls it a “massive problem.” The people creating the risk aren’t malicious. They’re trying to work faster. They open a browser tab, paste a client contract into ChatGPT, get a summary back, and move on with their day. There’s no log of it happening, no audit trail, and no way to find out until something goes wrong.
Many organisations have responded by rolling out sanctioned AI tools. Kordia’s report notes that this hasn’t solved the problem, because many deployments lack “sufficient security governance and practices.” The line between sanctioned and shadow AI gets blurry when approved tools have no guardrails either.
What we see at this scale
We’ve been monitoring AI usage at a smaller company for several months now. Around 30 people, more than 3,700 prompts across four platforms, only one of which was officially approved.
38% of interactions contained something worth flagging. Personal information, credentials, financial data, client names. Not because people were being careless. They just didn’t realise what they were sharing.
295 high-risk prompts went in. 12 came out the other side without being stopped or modified. That’s a 96% intervention rate at the point of entry, before sensitive data reaches an external AI platform.
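To make “stopped or modified at the point of entry” concrete, here’s a minimal sketch of that kind of client-side screening. The detector names and regexes are illustrative assumptions for the sketch, not Vireo Sentinel’s actual rule set.

```typescript
// Minimal sketch of point-of-entry prompt screening. The detector list
// and regexes below are illustrative, not a production pattern set.

type Finding = { detector: string; match: string };

const DETECTORS: { name: string; regex: RegExp }[] = [
  { name: "email-address", regex: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "credit-card", regex: /\b(?:\d[ -]?){13,16}\b/g },
  { name: "api-key", regex: /\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b/g },
];

// Scan a prompt before it leaves the browser; return anything flagged.
function screenPrompt(prompt: string): Finding[] {
  const findings: Finding[] = [];
  for (const { name, regex } of DETECTORS) {
    for (const m of prompt.matchAll(regex)) {
      findings.push({ detector: name, match: m[0] });
    }
  }
  return findings;
}

// Example: a contract-summary request that leaks a client contact.
const flagged = screenPrompt("Summarise this contract for jane.doe@clientco.nz");
if (flagged.length > 0) {
  // In an extension, this is the point to block, redact, or warn,
  // before the text ever reaches an external AI platform.
  console.warn("High-risk prompt flagged:", flagged);
}
```

The decision to intervene happens before submission, which is why the 295 high-risk prompts above were caught in the browser rather than reconstructed later from platform logs.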
The Kordia findings track with what we see at this much smaller scale. The gap isn’t awareness of AI. Everyone knows their teams use it. The gap is visibility into what data is going where.
The difference between a dashboard alert and a forensics bill is visibility at the point of entry. Kordia’s data just put nearly 250 businesses’ worth of survey evidence behind that.
What businesses should do now
Map and monitor shadow AI. Discover which AI tools staff actually use. Log outbound AI traffic where you can. If you treat unsanctioned SaaS as a risk, unsanctioned AI should get the same scrutiny, or more. Tools like Vireo Sentinel can automate that discovery with a browser extension that deploys in minutes.
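Even without tooling, a first pass at discovery can be a simple log scan. A minimal sketch, assuming a hypothetical proxy-access.log with one request per line; the domain list is illustrative and incomplete, so adapt both to whatever your proxy or DNS resolver actually emits.

```typescript
// Sketch: tally outbound requests to known AI endpoints from a proxy log.
// The log path and domain list are assumptions for this example.
import { readFileSync } from "node:fs";

const AI_DOMAINS = [
  "chatgpt.com",
  "claude.ai",
  "perplexity.ai",
  "gemini.google.com",
];

const counts = new Map<string, number>();
for (const line of readFileSync("proxy-access.log", "utf8").split("\n")) {
  const domain = AI_DOMAINS.find((d) => line.includes(d));
  if (domain) counts.set(domain, (counts.get(domain) ?? 0) + 1);
}

for (const [domain, hits] of counts) {
  console.log(`${domain}: ${hits} requests`);
}
```

A tally like this tells you which tools are in use and how often, but not what was pasted into them; that still requires visibility at the point of entry.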
Tighten acceptable-use policies with specific examples. Generic “be careful with AI” guidance doesn’t work. Staff need scenario-based rules. Tell them what they can’t paste: client records, internal pricing, source code, employee data. Tell them what’s fine: publicly available information, generic drafts, brainstorming. Make the boundary clear enough that a new hire understands it on day one.
Update training for AI-driven social engineering. The same Kordia report flags AI-enhanced phishing, deepfake voice calls, and prompt-based scams. Staff training built around spotting dodgy emails from 2019 doesn’t cover a cloned voice on a Teams call asking for a wire transfer. Retrain for the attacks that exist now.
Strengthen identity and data controls. Enforce phishing-resistant MFA. Implement data classification so you can apply policy at the data layer regardless of which AI tool an employee opens. If your data is classified and your endpoints know it, enforcement becomes possible.
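A minimal sketch of what “policy at the data layer” means in practice, with illustrative classification labels and a single hypothetical rule. The point is that the decision hangs off the data’s label, not off which tool the employee opened.

```typescript
// Sketch: a sharing decision driven by the data's classification,
// independent of the destination tool. Labels and rules are illustrative.

type Label = "public" | "internal" | "confidential" | "restricted";

// Example rule: only data classified as public may leave the
// organisation via any external AI tool.
const ALLOWED_OUTBOUND = new Set<Label>(["public"]);

function mayShareWithExternalAI(label: Label): boolean {
  return ALLOWED_OUTBOUND.has(label);
}

// The same check applies whether the destination is ChatGPT, Claude,
// Perplexity, or Gemini: classify once, enforce everywhere.
console.log(mayShareWithExternalAI("public"));       // true
console.log(mayShareWithExternalAI("confidential")); // false
```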
Add AI misuse to insider-threat playbooks. Pasting a client’s commercial terms into ChatGPT before a bid submission is a data exfiltration event. It deserves the same detection, investigation, and response framework as any other insider risk scenario. The intent is different from a disgruntled employee downloading a customer database, but sensitive data left your control either way.
This is what we built Vireo Sentinel to do. Browser extension, deploys in minutes, monitors AI usage across ChatGPT, Claude, Perplexity, and Gemini with 100+ detection patterns. Your team keeps using AI. You get visibility into what’s being shared and evidence you can show an auditor. Free 14-day trial, no credit card required at vireosentinel.com.
Where this is heading
New Zealand is an early signal, not an outlier. If directors there are already naming shadow AI as their biggest AI cyber risk, boards in Australia, the UK, and elsewhere won’t be far behind. The regulatory runway supports this. The EU AI Act’s obligations phase in through August 2026, when the bulk of its provisions become enforceable. Australia’s Privacy Act reforms land in December 2026. Both will require organisations to demonstrate how they manage AI-related data risk.
The businesses that build an AI-aware insider threat programme now, covering policy, monitoring, and response, will be able to answer the question when it comes from an auditor, an insurer, or a client’s procurement team.
The ones that wait for an incident will find out what Deloitte already discovered. You can’t fix this with an email.
Sources
- Kordia: Biggest AI cyber threat may be coming from inside your business, 9 March 2026
- Reseller News: NZ businesses facing increased cyber risk from AI, 9 March 2026
- Kordia: Threat of AI cyber-attacks a top concern for NZ businesses, 9 March 2025 (prior-year baseline)
- Institute of Directors NZ: Kordia report shows AI cyberattacks front of mind, March 2025