โ† Back to Resources

What is the 30% Rule for AI?

The 30% rule is a guideline for how to divide work between AI and humans. The basic idea: AI and automation should handle roughly 70% of the work, the routine and repetitive tasks. Humans should focus on the remaining 30%, the part that requires creativity, judgment, empathy, and strategic thinking.

It's not a hard rule. There's no research paper that landed on exactly 30%. But as a framework for thinking about AI adoption, it's surprisingly useful.

Where the concept comes from

The 30% rule emerged from practical observation of how AI works best.

AI excels at structured, repetitive work. Processing data. Generating first drafts. Identifying patterns. Handling routine queries. These tasks are predictable, and predictability is what AI needs to perform well.

Humans excel at ambiguity. Interpreting context. Making ethical decisions. Building relationships. Navigating situations that don't fit existing patterns. This is the 30% that machines can't replicate.

Tsedal Neeley's research on digital mindset draws an interesting parallel: non-native speakers can achieve effective workplace communication with about 3,500 words, roughly 30% of the 12,000 required for native-like mastery. The insight is that focused effort on the right 30% can be remarkably effective.

How it works in practice

Consider a corporate lawyer preparing for a client meeting.

Before AI, the team would manually review hundreds of pages of contracts and precedents. Hours of work, much of it tedious pattern-matching.

With AI, that document review can be automated. The AI finds relevant clauses, flags inconsistencies, and summarises key points. This might represent 70% of the preparation work.

But interpreting edge cases, weighing business risk, and advising the client? That's still the lawyer's job. That's the 30%, and it's where human expertise is irreplaceable.

Examples across industries

Healthcare: AI handles anomaly detection in scans and routine diagnostic screening. Doctors focus on complex cases, patient relationships, and treatment decisions.

Finance: AI manages fraud alerts, first-pass modelling, and transaction monitoring. Analysts focus on strategic recommendations and client advisory.

Customer support: AI resolves common queries through chatbots and auto-responses. Support staff handle escalations, complex problems, and relationship management.

Software development: AI generates boilerplate code, writes test cases, and suggests refactoring. Developers focus on architecture, problem-solving, and review.

Marketing: AI drafts content, analyses data, and personalises campaigns. Marketers focus on strategy, creative direction, and brand decisions.

The pattern is consistent. AI handles the predictable 70%. Humans handle the judgment-intensive 30%.

Why this matters for AI governance

Here's the connection that often gets missed: even if AI is only doing 70% of the work, it's still processing your data.

The 30% rule is about productivity. AI governance is about risk.

Your team might use AI to handle the routine parts of their job. But consider the data they feed into those tools. Customer information from support queries. Financial data from modelling exercises. Proprietary code from development tasks. Strategic content from marketing campaigns. The 70% that AI handles often contains the most sensitive information.

The 30% rule tells you AI can boost productivity significantly. It doesn't tell you anything about whether that AI usage is safe.

The governance gap

Organisations often embrace AI for productivity while neglecting the risk side.

They see the 70% efficiency gains and celebrate. They don't see the data flowing into external AI systems. They don't know which tools their team is using. They don't have policies for what information should and shouldn't be shared.

This creates shadow AI: employees using AI tools without oversight, often with good intentions and without understanding the risks they're creating.

The 30% rule is a productivity framework. But every organisation applying it also needs an accompanying governance framework.

Balancing productivity and protection

The goal isn't to choose between AI productivity and AI governance. You can have both.

Start with visibility. Know which AI tools your team uses and how they use them. You can't govern what you can't see. From there, set clear boundaries around what data should never go into external AI tools, regardless of how useful the output might be.
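To make "clear boundaries" concrete, here's a minimal sketch of what a pre-send check might look like: a small script that flags a prompt before it leaves your environment, either because the tool isn't sanctioned or because the text appears to contain sensitive data. Everything in it, the tool names, the patterns, and the check_prompt function, is hypothetical; a real control would be driven by your own data classification and approved-tool list rather than a handful of regexes.

```python
import re

# Hypothetical examples of data that should never go to an external AI tool.
# A real policy would come from your data classification scheme, not a few regexes.
BLOCKED_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal project codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
}

# Hypothetical allowlist of sanctioned tools with enterprise data protections.
SANCTIONED_TOOLS = {"internal-assistant", "enterprise-llm"}


def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return the reasons this prompt should not be sent; empty means it looks OK."""
    problems = []
    if tool not in SANCTIONED_TOOLS:
        problems.append(f"'{tool}' is not a sanctioned AI tool")
    for label, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            problems.append(f"prompt appears to contain a {label}")
    return problems


if __name__ == "__main__":
    issues = check_prompt("public-chatbot", "Summarise the complaint from jane@example.com")
    for issue in issues:
        print("Blocked:", issue)
```

The point isn't the regexes. It's that the boundary is written down somewhere enforceable, not left to each employee's judgment in the moment.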

If employees need AI to do their jobs well (and they probably do), give them sanctioned options with appropriate data protections. Enterprise AI tools, internal assistants, approved workflows.

Pay particular attention to the routine work. That 70% that AI handles? It's often exactly where sensitive data lives. Customer queries contain personal information. Financial models contain confidential figures. Code contains intellectual property. The mundane work isn't always low-risk work.

The human 30%

One more dimension worth considering: the 30% isn't just about task allocation. It's about the skills that matter most.

As AI takes over routine work, human value shifts. Creativity matters more: generating novel ideas, seeing connections that aren't obvious from patterns. Judgment matters more: making decisions when the data is ambiguous or incomplete. Ethics matters more: determining what should be done, not just what can be done. So do relationships and oversight: building trust, communicating nuance, reviewing AI outputs, and catching errors.

These skills become more valuable, not less, in an AI-augmented workplace. But they also require humans to stay engaged with the work, not just to delegate everything to machines.

Applying this to your organisation

If you're thinking about the 30% rule for your business, start by identifying where AI can take over routine work. Look for tasks that are repetitive, pattern-based, and time-consuming. These are your candidates for the 70%.

Then consider where human judgment is essential. Decisions with ethical implications, ambiguous inputs, or significant consequences. This is your 30%, and it needs to stay human.

But don't stop there. Ask what data flows through that 70%. Even routine tasks involve information that needs governance. Customer data, financial figures, proprietary content. None of it becomes less sensitive just because AI is processing it.

And finally: do you actually have visibility into AI usage? If your team is applying the 30% rule, consciously or not, do you know which tools they're using and what they're sharing?

The bottom line

The 30% rule is a helpful framework for thinking about AI adoption. Let AI handle routine work. Focus human effort on what humans do best.

But productivity is only half the equation. The data that flows through AI-assisted work still needs protection. The tools your team uses still need visibility. The policies governing AI usage still need to exist.

Embrace the 30% rule for efficiency. Build AI governance for safety. Do both, and AI becomes an asset rather than a liability.

Get visibility into your team's AI usage

Vireo Sentinel gives you visibility into how your team uses AI tools. Even as AI takes on more routine work, you need to know what data is flowing through those systems.
