For Schools

Your duty of care doesn't stop when staff open ChatGPT

Vireo Sentinel shows school leaders what staff share with AI tools, and catches student information before it leaves the browser. Built for independent, Catholic, and faith-based schools across Australia.

Built in Perth
Privacy by design
No identifiable student data on our servers

Your duty of care doesn't stop at the staff room door

Every staff member at your school has access to information about children. Names, learning support plans, behavioural notes, family circumstances, mental health observations, child protection records. Your duty of care covers what they do with that information, including when they paste it into ChatGPT or Claude to draft a report card, summarise a welfare meeting, or polish a parent email.

The teachers aren't being careless. They're being efficient. But each of those moments is a potential breach of the Privacy Act, a gap in your Child Safe Standards compliance, and in some cases an issue under the Reportable Conduct Scheme.

Most schools have no visibility into any of it.

Six things happening at your school this term

Report cards

A teacher pastes observation notes into ChatGPT to draft a comment. Student names, learning difficulties, family circumstances, all sitting on OpenAI's servers.

Counselling notes

A school counsellor types up meeting notes about a student's mental health into Claude. Sensitive health information leaves the building in seconds.

Suspension and behaviour letters

An admin officer pastes a full incident report into an AI tool to soften the wording for parents. Names, dates, what happened, all transmitted to a third party.

Learning support and IEPs

A learning support coordinator pastes assessment results and diagnoses into AI to draft a plan. Health information patterns sit in the prompt history.

Parent communications

A deputy uses AI to draft a difficult message to a parent. The prompt includes custody details, financial hardship, and the child's behaviour history.

Mandatory reporting and child protection notes

A teacher uses AI to help structure a difficult report. The notes contain identifying information about a child at risk. This is the moment a privacy issue can become a child safety issue.

Blocking doesn't fix this. Policies on their own don't either.

Staff work from home, use personal devices, and access AI through every browser they open. Bans push usage underground. Fewer than one in ten employees have had any meaningful AI training.

78% of staff use AI tools their employer hasn't approved (WalkMe, 2025)

8.5% of prompts to AI tools include sensitive data (Harmonic Security, 2025)

60% say unsanctioned AI is worth the risk to meet deadlines (BlackFog, 2026)

This isn't generic corporate data. It's information about children.

Student records

Names, dates of birth, addresses, enrolment details, attendance, student IDs.

Learning and development

IEPs, learning difficulties, gifted and talented assessments, academic performance.

Behavioural and wellbeing

Incident reports, counselling notes, mental health observations, family circumstances.

Child protection

Mandatory reporting notes, at-risk assessments, welfare referrals, inter-agency correspondence.

Parent communications

Custody arrangements, financial hardship, medical disclosures.

Staff and administrative

HR records, performance reviews, payroll, board papers.

What the law and the standards already require

Australian schools already operate under a layered set of obligations covering how staff handle information about children. AI use intersects with most of them.

Privacy Act 1988 (as amended by the POLA Act 2024)

In effect now and through to 10 December 2026

Schools that exceed the small business turnover threshold are APP entities, and most independent and Catholic schools do. Three changes from the 2024 amendments matter most:

A statutory tort for serious invasion of privacy is in effect from 10 June 2025. Parents can bring an action on behalf of their children. Damages of up to $478,550 are available, and emotional distress alone is sufficient. This sits alongside, not inside, the OAIC's enforcement powers.

The OAIC's enforcement priorities for 2025 to 2026 specifically include scrutinising AI and biometric technologies. Civil penalties of up to $50 million apply for serious or repeated breaches.

The Children's Online Privacy Code is due 10 December 2026. AI tools used in educational settings will be caught by it.

Child Safe Standards

In effect now, every state and territory

Every school in Australia operates under a version of the Child Safe Standards. They explicitly cover information handling about children and the responsibility of staff to protect that information. Western Australia, Victoria, New South Wales, Queensland and Tasmania all have legislated standards with active oversight bodies. Demonstrating that staff handle student information appropriately, including in AI tools, supports compliance with Standard 7 and equivalent provisions in each jurisdiction.

Reportable Conduct Schemes

NSW, Victoria, ACT, in effect now

The Reportable Conduct Scheme places obligations on school heads to report certain conduct involving children. Mishandling information about a child, particularly in the context of welfare or protection matters, can intersect with the scheme. Schools need to be able to show how staff handle that information across every channel they use, including AI.

Australian Framework for Generative AI in Schools

Endorsed by Education Ministers, June 2024, updated 2025

The national framework explicitly requires schools to restrict the upload of personally identifiable information into generative AI tools. Six principles cover privacy, transparency, fairness, accountability, human oversight, and benefit to students. Vireo supports the privacy and accountability principles directly by detecting identifiable information before it reaches AI platforms and producing an evidence trail of how staff have engaged with safety prompts.

AITSL Australian Professional Standards for Teachers

In effect now, all teachers

Standard 7 (Engage Professionally with Colleagues, Parents and the Community) covers ethical conduct and confidentiality. Schools are increasingly being asked to demonstrate that staff understand how AI use intersects with their professional obligations. Vireo provides a record of how staff have been prompted around sensitive information and what choices they made.

State education department information handling policies

In effect now, varies by jurisdiction

State departments have their own information handling and acceptable use frameworks for schools they oversee. WA, Victoria, NSW and Queensland each have policies that touch on AI use in some form. Independent and Catholic schools in those states are not bound by the department policies but are increasingly expected to align with them.

What Vireo actually does for your school

See what's happening

Which AI tools your staff use, how often, and what categories of work go in. Patterns you didn't know existed, surfaced in a dashboard your principal and business manager can read.

Catch information before it leaves

Real-time detection of identifiable student information using 100+ detection patterns and entity recognition that runs locally in the browser. When a staff member is about to share something risky, they see a prompt. They can cancel, redact, edit, or override with a justification. Human in the loop, always.

Support your governance

Compliance-supporting reports mapped to the Privacy Act, the Australian Framework for Generative AI in Schools, and your Child Safe Standards obligations. Evidence for your board, your auditor, your insurer, and your registration body.

What we see, and what we never see

This is the question every school asks first. Here's the answer in plain language.

What Vireo sees

  • Metadata about each AI interaction: who, when, which platform, the risk score, and what category of information was detected
  • The fact that a staff member triggered a warning and what they chose to do
  • Aggregated usage patterns across your school

What Vireo never sees

  • The text of the prompt itself
  • The names, dates of birth, or other identifying details of any student
  • Any content that triggers redaction at the point of detection
  • Any content that the staff member cancels rather than submits

Detection happens inside the browser extension on the staff member's own computer. The prompt is analysed locally using 100+ detection patterns and entity recognition. Identifiable information is redacted at the source. Only metadata and risk scoring leaves the browser.

Your data lives in the Australian region by default, with in-country hosting and backups. Only administrators you nominate can access your school's data. Vyklow Analytics staff have no routine access and require an audited support request. A Data Processing Agreement is available on request.

By design, no identifiable information about a student ever reaches our servers. Detection runs locally. Redaction happens before submission. Metadata is the only thing we hold. This is the architecture that lets us claim, defensibly, that Vireo supports your duty of care without becoming a new exposure of its own.
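For the technically minded, the flow above can be sketched in miniature. This is an illustrative toy, not Vireo's engine: the `redactLocally` function, the pattern names, and the `[REDACTED]` token are all hypothetical, and a real detection pass combines many more patterns with entity recognition rather than two regexes.

```typescript
// Illustrative sketch of in-browser redaction. Everything here is
// hypothetical; it only shows the shape of the architecture:
// the prompt text is transformed locally, and only category-level
// metadata is ever produced for the server.
type Detection = { category: string; count: number };

// Toy stand-ins for a real pattern library.
const PATTERNS: Record<string, RegExp> = {
  studentId: /\bS\d{6}\b/g,                    // e.g. "S123456"
  dateOfBirth: /\b\d{1,2}\/\d{1,2}\/\d{4}\b/g, // e.g. "01/02/2015"
};

function redactLocally(prompt: string): { redacted: string; metadata: Detection[] } {
  const metadata: Detection[] = [];
  let redacted = prompt;
  for (const [category, pattern] of Object.entries(PATTERNS)) {
    const matches = redacted.match(pattern);
    if (matches) {
      // Record only the category and count, never the matched text.
      metadata.push({ category, count: matches.length });
      redacted = redacted.replace(pattern, "[REDACTED]");
    }
  }
  // `redacted` replaces what the staff member submits; `metadata`
  // is the only artefact that would ever leave the browser.
  return { redacted, metadata };
}
```

The point of the sketch is the boundary: the original prompt never appears in the return path to any server, only the redacted text (which stays with the AI platform the staff member chose) and the category counts.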

Built for how schools actually buy and run technology

Warns, doesn't block

Choices, not roadblocks. Staff stay in control and keep working.

Deploys in minutes

A browser extension. No agents, no proxies, no IT project, no department approval.

Privacy by design

Detection and redaction happen in the browser before anything reaches our servers.

Built for school budgets

Transparent pricing, no enterprise contract minimums, annual invoicing on request.

Pricing your business manager can actually read

Vireo's per-staff cost drops as your team grows. Annual invoicing is available for all school sizes, with payment terms to suit your finance cycle. See the full pricing page for details.

Get in touch about your school. A quick conversation to understand your size, your setup, and what pricing looks like for you.

Get in touch

See how Vireo could fit your school

A 30-minute conversation, no obligation. We'll show you what visibility looks like, how the regulatory frameworks line up against your existing policies, and what a deployment in your school would involve.

Frequently asked questions

Does Vireo Sentinel see the actual text of student data?

No. Detection runs locally in the browser. Only metadata and risk scores leave the browser. Identifiable student information is redacted at the source before anything reaches Vireo's servers.

How does Vireo help schools meet their duty of care around AI?

Vireo gives schools visibility into which AI tools staff use, catches identifiable student information before it leaves the browser, and generates compliance-supporting reports for boards, auditors, and registration bodies.

How long does it take to deploy Vireo in a school?

Under 10 minutes for the administrator. Staff install a browser extension in under 2 minutes each. No network changes, no IT project, no proxy configuration.