Frequently asked questions
Everything you need to know about AI visibility, data protection, and how Vireo Sentinel works.
Getting started
How long does setup take?
About 10 minutes for the admin, under 2 minutes per employee. You create an account, set up your organisation, and invite your team. They install a browser extension. That's it. No network configuration, no proxy setup, no IT tickets.
How quickly will we see results?
Usage data starts flowing as soon as the extension is installed. Most teams see a full picture of their AI activity within the first week. The dashboard shows which tools are being used, how often, what categories of work, and where sensitive data has been detected.
Which browsers are supported?
Chrome, Edge, Brave, and any other Chromium-based browser, plus Firefox. Each user licence covers up to 5 devices, so people can install it on their work laptop and desktop without needing separate licences.
Data protection and privacy
What kinds of sensitive data does Vireo detect?
Over 50 detection patterns covering personal information (names, emails, phone numbers, national IDs), financial data (credit cards, bank accounts, tax file numbers), technical credentials (API keys, passwords, access tokens), and confidential business information (client names, project codes). Detection happens in the browser before anything reaches the AI platform, so sensitive data never leaves the device if your team chooses to remove it.
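For illustration, here is a minimal sketch of what browser-side pattern detection might look like. The rule names and regexes are hypothetical, not Vireo's actual detection patterns:

```typescript
// Hypothetical detection rules for illustration; Vireo's real patterns are not public.
interface Detection {
  pattern: string; // which rule matched
  match: string;   // the flagged text
}

const RULES: { name: string; regex: RegExp }[] = [
  { name: "email",       regex: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "credit_card", regex: /\b(?:\d[ -]?){13,16}\b/g },
  { name: "api_key",     regex: /\b(?:sk|pk)_[A-Za-z0-9]{20,}\b/g },
];

// Runs entirely in the page: nothing is sent anywhere to evaluate the prompt.
function scanPrompt(prompt: string): Detection[] {
  const detections: Detection[] = [];
  for (const rule of RULES) {
    for (const m of prompt.matchAll(rule.regex)) {
      detections.push({ pattern: rule.name, match: m[0] });
    }
  }
  return detections;
}

scanPrompt("Invoice for jane@acme.com, card 4111 1111 1111 1111");
// -> [{ pattern: "email", match: "jane@acme.com" },
//     { pattern: "credit_card", match: "4111 1111 1111 1111" }]
```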
Where does our data go, and how is it protected?
Risk detection happens in the browser before anything leaves your device, and sensitive data is redacted before it reaches our servers. We store metadata about AI interactions (who, when, which platform, risk score); the prompt content itself is redacted. Your organisation's data is completely isolated from other customers.
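As a sketch of what that separation could look like (the event shape here is an assumption, not Vireo's actual schema):

```typescript
// Assumed event shape for illustration; not Vireo's actual schema.
// The point: metadata crosses the wire, raw prompt content does not.
interface InteractionEvent {
  userId: string;    // who
  timestamp: string; // when
  platform: string;  // which AI platform
  riskScore: number; // computed locally in the browser
  // No raw `prompt` field in this sketch: content is redacted client-side
  // before anything is sent.
}

function buildEvent(userId: string, platform: string, riskScore: number): InteractionEvent {
  return { userId, platform, riskScore, timestamp: new Date().toISOString() };
}
```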
Won't my team feel like they're being watched?
This comes up a lot. Vireo works through visible interventions, not silent surveillance. When someone's about to share sensitive data, they see a prompt asking them to reconsider. They can still proceed if they have a good reason. Your team stays in control. Most companies find that once employees understand it's about protecting them (and the company) rather than watching them, the resistance disappears.
Our team only uses AI for simple things like drafting emails. Is that really a risk?
That's where most data leakage actually happens. The prompt might look harmless: "Draft a reply to John Smith about the Anderson account." But it just sent a real person's name and a client relationship to an external AI platform. Multiply that across every person on your team, every day, and the exposure adds up fast. The risk isn't the tool. It's what goes into it.
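To make that concrete, here is a hypothetical sketch of dictionary-based redaction using an org-configured term list. Real detection of person names and client relationships would need more than a static list; this only shows the idea:

```typescript
// Hypothetical org-configured term list; not how Vireo actually detects names.
const CONFIDENTIAL_TERMS = ["John Smith", "Anderson"];

// Replace every occurrence of a configured term before the prompt is sent.
function redactTerms(prompt: string): string {
  let out = prompt;
  for (const term of CONFIDENTIAL_TERMS) {
    out = out.split(term).join("[REDACTED]");
  }
  return out;
}

redactTerms("Draft a reply to John Smith about the Anderson account.");
// -> "Draft a reply to [REDACTED] about the [REDACTED] account."
```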
How it compares
How is Vireo different from traditional DLP?
Traditional DLP watches network traffic and file movements. It wasn't built for AI. When someone types confidential information directly into ChatGPT, most DLP tools don't see it because there's no file to scan. Vireo works at the browser level, catching data at the point of entry. If you already have enterprise DLP, Vireo fills the AI-specific gap.
Doesn't our AI platform's built-in security cover this?
Platform security features protect data within that platform. They don't give you visibility into what's being shared or track patterns across tools. Your team probably uses more than one AI platform, so you need visibility and protection across all of them, not just the one you're paying for.
Why not just ban AI tools?
Bans don't work. They push AI usage underground, where you have zero visibility. Your team is already using AI tools. The choice isn't whether they use AI, but whether you can see what's happening when they do. Visibility lets you support productive AI use while catching genuine risks.
Isn't this just another DLP tool?
DLP blocks and logs. Vireo does something different: it gives your team visibility and options. When we detect a risk, the employee sees what was flagged and can choose to cancel, edit, redact, or override with a reason. That creates a culture of awareness rather than a game of cat and mouse. Plus, the analytics and audit trails are built specifically for AI workflows, not retrofitted from network security tools.
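A sketch of what that choice and its audit record could look like (the type and function names here are assumptions, not Vireo's API):

```typescript
// Assumed types for illustration; not Vireo's actual API.
type InterventionChoice =
  | { action: "cancel" }                    // abandon the prompt
  | { action: "edit" }                      // go back and rewrite it
  | { action: "redact" }                    // strip the flagged data, then send
  | { action: "override"; reason: string }; // proceed, with a logged reason

interface AuditEntry {
  flaggedPatterns: string[];  // what was detected
  choice: InterventionChoice; // what the employee decided
  timestamp: string;
}

function recordChoice(flaggedPatterns: string[], choice: InterventionChoice): AuditEntry {
  return { flaggedPatterns, choice, timestamp: new Date().toISOString() };
}

// An override always carries a reason, so the audit trail captures intent,
// not just the fact that someone proceeded.
recordChoice(["client_name"], { action: "override", reason: "Client name is already public" });
```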
Reporting and compliance
Can Vireo help with compliance reporting?
Yes. Vireo generates a report showing your AI systems inventory, risk controls, and how effective your data protection actually is. Export it as a PDF any time. Some customers attach it to client proposals to show they take data handling seriously. Others use it for board reporting or internal audit. It also maps to EU AI Act, ISO 42001, and Australian Privacy Act requirements if you need the regulatory angle.
Is Vireo useful even without a compliance requirement?
Absolutely. Most of our users don't start with a regulatory requirement. They start because they want to know what their team is actually doing with AI and make sure sensitive data isn't leaking out. That's a business risk question, not a legal one. The reporting tools are there if you ever need them for an audit or client request, but the core value is visibility and protection.
Do we need full ISO 42001 certification?
Probably not yet. Full certification involves external audits and real investment. But aligning with the framework (documented AI policies, risk controls, and monitoring evidence) gives you most of the benefit without the cost. When a client asks "how do you manage AI risk?", showing them actual usage data and intervention reports beats a policy document every time.
Are we too small to need this?
Size doesn't determine risk. A 10-person company sharing client data with ChatGPT faces the same data leakage exposure as a 1,000-person enterprise. The difference is that smaller teams often have less visibility into what's actually being shared. Vireo is built for teams of 5 to 500 and priced accordingly. The free tier lets you try it with no commitment.
Still have questions?
Start with a free account and see for yourself, or get in touch and we'll walk you through it.