Your recruiters use AI to screen CVs, write job ads, and brief clients. Do you know what candidate data they're sharing?
Consultants paste full CVs into ChatGPT for summaries, resourcers screen candidate profiles with Claude, and managers draft shortlists with Gemini. Vireo Sentinel shows you what's happening and catches candidate details before they reach external services.
What's actually happening
CV screening
A resourcer pastes 15 full CVs into ChatGPT to rank candidates for a shortlist. Names, addresses, employment histories, qualifications, and salary expectations, all now on OpenAI's servers.
Interview debrief
A consultant types detailed interview notes into Claude to draft client feedback. Performance observations, salary expectations, notice periods, and reasons for leaving, all now on a third-party platform.
Client job brief
A manager drops a confidential brief into Gemini to write the job advertisement. Company name, team structure, reporting lines, budget range, and reasons for the vacancy, all exposed.
Candidates trust you with their careers. That trust deserves more than a terms-of-service checkbox.
Your agency processes thousands of personal records. Every CV, interview note, and reference check is someone's personal information flowing through your team.
See what your team shares with AI
One leaked client brief or candidate profile costs more than a year of governance.
Start free
The data at risk in every placement
Agencies hold highly personal records across candidates, clients, and their own workforce. Here's what flows into AI platforms unchecked.
Candidate personal details
Full names, addresses, dates of birth, phone numbers, email addresses, nationality, visa status, photo IDs.
Employment and career history
Current and previous employers, job titles, responsibilities, reasons for leaving, performance details, reference contacts.
Salary and financial information
Current salary, bonus structures, salary expectations, benefits packages, contractor rates, fee agreements.
Interview and assessment records
Interview notes, psychometric results, skills assessments, behavioural observations, reference check outcomes.
Client confidential information
Company names, team structures, hiring budgets, replacement reasons, internal politics, strategic hiring plans.
Health and diversity disclosures
Disability disclosures, diversity information, reasonable adjustment requests, parental leave plans, medical clearances.
Privacy regulators are already watching recruitment AI
The ICO audited AI hiring tools in 2024 and made nearly 300 compliance recommendations. This is one of the first industries where regulators are actively investigating how AI handles personal information.
Privacy Act and employment regulations
Privacy Act 1988 (as amended by the Privacy and Other Legislation Amendment Act 2024)
Statutory tort from 10 June 2025
Candidates can pursue damages for serious privacy invasions, with non-economic loss capped at $478,550. The OAIC can pursue civil penalties of up to $50 million for serious breaches. Enforcement priorities for 2025-26 explicitly include AI-related privacy practices.
Australian Privacy Principle 6 (Use and disclosure)
In effect now
Personal information collected for recruitment can only be used for that purpose unless consent is given. Pasting candidate details into a general-purpose AI platform likely exceeds the original collection purpose.
Anti-discrimination legislation
In effect now
If AI influences hiring decisions and produces discriminatory outcomes, the agency and client face liability under federal and state anti-discrimination laws. Using AI without understanding its decision-making process creates risk.
Employee records exemption limitations
In effect now
The employee records exemption under the Privacy Act only applies to current employees of the collecting organisation. It does not cover candidate records held by recruitment agencies, which remain fully subject to the APPs.
UK GDPR, Equality Act, and ICO enforcement
ICO AI recruitment audit findings
Published November 2024
The ICO audited multiple AI recruitment tool providers and found systems collecting far more personal information than necessary and retaining it indefinitely. Nearly 300 recommendations were issued, all accepted or partially accepted.
UK GDPR and Data Protection Act 2018
In effect now
DPIAs are required before deploying new technology that processes personal information. Candidate records shared with AI services require a lawful basis. ICO fines reach up to 17.5 million GBP or 4% of global turnover, whichever is higher.
Data (Use and Access) Act 2025
Part 5 commenced 5 February 2026
Modernises UK data protection post-Brexit. Clarifies lawful basis for processing applicant information and introduces refined requirements for automated decision-making in recruitment contexts.
Equality Act 2010
In effect now
If AI screening produces discriminatory outcomes, the agency and client face liability. The EU AI Act classifies recruitment AI as high-risk, which influences UK best practice even without direct applicability.
EU AI Act, GDPR, and employment law
EU AI Act: recruitment classified as high-risk
August 2026
AI systems used in recruitment and selection are explicitly classified as high-risk under Annex III. This triggers mandatory requirements for documentation, logging, human oversight, transparency, and risk assessments.
GDPR data minimisation
In effect now
Sending candidate personal information to AI services beyond what's strictly necessary is a data minimisation violation. The ICO audit found recruitment tools routinely breaching this principle. Fines reach up to 20 million EUR or 4% of global turnover, whichever is higher.
AI Literacy requirements
February 2025
Organisations must ensure staff have sufficient AI literacy. Recruitment agencies need to show their people understand the risks of sharing candidate details with these platforms.
EU AI Act penalties
August 2026
Up to 15 million EUR or 3% of global turnover for non-compliance with high-risk requirements. Recruitment is one of the specifically named high-risk use cases.
Federal and state employment regulations
NYC Local Law 144 (automated employment decision tools)
In effect now
Requires bias audits for automated employment decision tools used in New York City. Sets a precedent for AI governance in recruitment that other jurisdictions are following.
EEOC AI guidance
In effect now
The EEOC has issued guidance on AI and algorithmic fairness in employment decisions. Agencies using AI that produces disparate impact face liability under Title VII of the Civil Rights Act.
State privacy laws
Varies by state
California CCPA/CPRA gives candidates rights over their personal information. Colorado AI Act (effective June 2026) includes specific provisions for high-risk AI in employment. Illinois BIPA applies to biometric data from video interviews.
FTC enforcement
Active enforcement
The FTC has pursued enforcement actions against companies mishandling personal records in AI systems. Recruitment platforms handling large volumes of candidate information are within scope.
How Vireo Sentinel helps recruitment agencies
See what's happening
Which platforms your consultants use, how often, and what type of work goes in. Spot the consultant running 50 CVs through ChatGPT before a candidate or client raises it.
Catch candidate details before they leave
Real-time detection of names, addresses, salary figures, and personal identifiers across CVs and interview notes. Warns the recruiter and gives them options: cancel, redact, edit, or override with a documented justification.
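The detection flow above can be sketched in a few lines. This is an illustrative example only, not Vireo Sentinel's actual rules or code: the pattern list, category names, and redaction tokens are assumptions, and a real deployment would cover far more identifier types.

```typescript
// Illustrative client-side PII check: scan text before it leaves the
// browser, report what was found, and offer a redacted alternative.
// Patterns below are simplified examples, not production rules.
type Finding = { kind: string; match: string };

const PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/g,          // e.g. jane.doe@example.com
  salary: /[£$€]\s?\d{1,3}(?:,\d{3})+/g,       // e.g. £45,000
  ukPhone: /\b(?:\+44\s?|0)\d{4}\s?\d{6}\b/g,  // e.g. 07700 900123
};

function detectPII(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const [kind, re] of Object.entries(PATTERNS)) {
    for (const m of text.matchAll(re)) {
      findings.push({ kind, match: m[0] });
    }
  }
  return findings;
}

// Replace each flagged value with a category placeholder, so the
// recruiter can proceed with an anonymised version of the prompt.
function redact(text: string, findings: Finding[]): string {
  return findings.reduce(
    (t, f) => t.split(f.match).join(`[${f.kind.toUpperCase()}]`),
    text,
  );
}
```

Because the matching runs entirely in the browser, the original text never has to reach a server for the warning to fire; only the recruiter's chosen action (cancel, redact, edit, or override) determines what leaves the page.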
Prove governance works
Compliance reports with audit trails. When a client asks about your data handling practices, or the ICO asks about your AI governance, show them evidence rather than an acceptable use policy.
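An audit trail like the one described might record entries shaped roughly as follows. The field names and schema here are assumptions for illustration, not Vireo Sentinel's actual format; the key design point is that the log stores detected categories, never the raw candidate values themselves.

```typescript
// Illustrative audit record: what was flagged, what the recruiter
// chose, and why. Field names are assumptions, not the real schema.
interface AuditEntry {
  timestamp: string;      // ISO 8601
  user: string;           // consultant identifier
  platform: string;       // e.g. "chatgpt.com"
  findings: string[];     // detected categories only, never raw values
  action: "cancel" | "redact" | "edit" | "override";
  justification?: string; // required when action is "override"
}

// Append an entry, enforcing that overrides carry a documented reason.
function recordAction(log: AuditEntry[], entry: AuditEntry): AuditEntry[] {
  if (entry.action === "override" && !entry.justification) {
    throw new Error("Overrides require a documented justification");
  }
  return [...log, entry];
}
```

A log built this way can answer a client's or regulator's question ("who sent what category of data where, and on what basis?") without itself becoming a second copy of the personal information it governs.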
What this looks like in practice
The CV batch screening
A resourcer pastes 20 CVs into ChatGPT to rank candidates against a job specification. The extension picks up names, addresses, employers, and salary figures across every CV. The resourcer chooses to redact personal identifiers and proceeds with anonymised summaries.
Interview notes to client feedback
Detailed debrief notes typed into Claude to draft a professional candidate summary. Vireo catches the candidate's name, current employer, salary, and performance observations. The consultant chooses to remove identifiers before generating output.
The confidential client brief
An internal hiring brief lands in Gemini to draft the job advertisement. Vireo flags the company name, reporting structure, budget range, and reason for vacancy. Override justifications are recorded.
Reference check compilation
Three referees' notes pasted into ChatGPT to create a summary report. Vireo catches referee names, contact details, and specific performance commentary. Each flagged item is documented.
Built for recruitment agencies
Warns, doesn't block
Consultants keep placing candidates. Choices, not roadblocks.
Deploys in minutes
A browser extension your team installs themselves. No infrastructure changes needed.
Privacy by design
Personal information detected and handled in the browser, before it reaches our servers.
Affordable
Governance that doesn't need an enterprise budget. Built for agencies that measure success in placements, not IT projects.
Explainable detection
Pattern matching, not AI analysing AI. When the ICO or a client audit asks how it works, you can give them a straight answer.
See how your agency uses AI
Start free
Vireo Sentinel supports your compliance efforts but does not provide legal advice. You remain responsible for your organisation's compliance obligations.