
Running from the shadows: Balancing governance and innovation

Dec 17, 2025 · 10 Mins

Topics

People-First

Risk Management

Privacy & Data Governance

AI Governance

AI Transparency

What does the level of shadow AI use at your company say about your leadership, and your governance?

Well, let’s start by reassuring you. There isn’t a reputable company in the world with no shadow AI use at all - so you’re either not remotely concerned about this because you have it in hand, or you fall into one of these buckets:

  • Prohibition - The governance probably says simply ‘no AI’. In this case, you should expect that between 30% and 60% of your people are using personal AI, often on work devices, for work purposes.

  • Islands - IT says ‘we’re exploring AI in a few areas, but not ready for a major rollout’. The reality is that, on top of the personal use, there will be at least one or two teams or departments outside those test programmes who have started using tools that work for them.

  • Hybrid - ‘You may only use the official tool (even though it’s not good at everything)’. You will have reduced shadow AI, but it will still be there for the use-cases not handled well by the official tool. Given that each tool and LLM is good at different things, it’s unlikely you’ll eliminate this if you are a one-tool organisation.

But even if you have it fully handled - an optimal set of tooling and a really well-balanced governance system that enables AI innovation in a measured, risk-managed way - you will still have some shadow AI usage.

Let’s consider those shadow users, and what they’re trying to achieve, by looking back at a pre-AI era:

From Shadows to Spotlight

Two ‘out of governance’ things that changed the world.

Mountain Bikes: Dodgy customisations that created an industry

In 1970s California, cyclists bolted motorcycle brakes and fat tyres onto old bikes to race (illegally) down Mount Tamalpais. Bike shops refused to sell them parts, and manufacturers said these "klunkers" were destroying perfectly good bicycles. But these modifications let ordinary people explore mountains, forests, any terrain - pure freedom and adventure beating efficiency every time. Mountain bikes now represent 25% of global bicycle sales, worth $6 billion annually.

McDonald's Drive-Through: The unauthorised hole that changed fast food

A franchise owner cut an unauthorised hole in his wall to serve soldiers who couldn't leave their vehicles in uniform, and McDonald's corporate was not pleased. But that hole solved problems nobody had even articulated - parents with sleeping babies, disabled customers, anyone in a hurry. Drive-throughs now generate 70% of McDonald's revenue, about $150 billion annually.

These shadow actors weren't setting out to cause trouble - they were practically, and provably, finding value. They did, of course, incur a fair amount of risk in the process, but fortunately the worst that happened was the odd broken bone in the Repack Downhill mountain bike races.

Now, let’s think about your shadow AI users: the people using AI tools - browser plugins, personal devices, web-based services - that don’t fall within your governance, almost certainly don’t comply with your data residency rules, and lack the audit and safety features you insist on.

Are they out to ruin you? Or are they onto something?

Well, let’s be honest, both things are true at the same time. They might ruin you, but they also might be on to something.

Meet Dave. Dave works in procurement. He’s recently discovered that ChatGPT is rather good at some things his other tools aren’t. His crowning achievement? A predictive vendor reliability model. It’s genuinely very cool and is saving you hundreds of thousands of pounds. The downside? He’s just uploaded your entire vendor database to a frontier LLM that learns from user data.

Now meet Cheryl. Cheryl plays by the rules. She’s also pretty smart with AI tools and, without breaching governance, worked up the same model at home using only dummy data. She presented it to an AI working group. They all nodded and said it was great, but that was six months ago and nothing’s happened since. Cheryl has now gone to work for a rival firm with a very different approach to AI governance.

You might not want to acknowledge it, but your organisation is full of Daves and Cheryls. While one sails close to the wind, the other becomes more and more frustrated. So, how do you strike the balance between empowering people to make AI-enabled improvements and ensuring they don’t lead you to an unlimited fine from the privacy regulator?

When your people find better ways to do things with unauthorised AI, they’re either going to get sneakier about it, or increasingly frustrated.

Here's the thing: You can either pretend this isn't happening, or you can do something rather clever about it.

What if we trusted our team’s instincts?

Most Shadow AI exists because of a simple formula:

People want to do good work > AI helps them do good work > There’s no legitimate way to do it > Therefore: shadows

Those are good instincts, so what if we ran with them? Instead of banning, what if we gave those individuals the tools and the smarts to avoid the risks we don’t want them to take, and the support to bring their ideas to life far better than they ever could on their own? What if we then recognised their initiative, and rewarded them for it?

In other words, what if we took things from the shadows and put them in the spotlight?

The Trust Balancing Act

So let’s assume we want a balanced approach: one that unlocks people’s potential and enables innovation, but in a way that mitigates risk - because of course there are genuine risks associated with AI use in the workplace, and most people don’t understand them well enough to avoid the kind of mistakes that could end up being very costly. Do we make people wait until we’ve legislated for all the risks, and accept that while they wait, the chance they’ll turn to the shadows only grows? (Hint: AI is moving so fast that the moment you legislate for one risk, another will pop up.) Or do we mandate the one safe, albeit limited, tool where we believe we’ve ironed out all these risks, knowing there will still be unauthorised tools that do more, and that our people will still be tempted to use them?

At Valliance we’ve seen much greater success with a more People-First approach.

At the heart of this approach is enablement. That means three things:

  • Build understanding - Giving people a truly practical and hands-on understanding of the opportunities, and importantly, the pitfalls, of AI. This has to start at the top with the Board. This is not training. This is practical, hands-on, "let me show you something terrifying/brilliant" enablement that builds genuine understanding, and the confidence to navigate AI risks.

  • Provide the tools, and practical technical skills, to turn good things into brilliant things - You don’t get shadow use if the official tooling is good. The problem then is that it’s very satisfying for anyone to start things and generate quick dopamine hits of AI doing something cool, but people often run out of steam, patience or skills when it comes to turning that into something that really works and is hardened for organisation-wide rollout: evaluations, feedback, solving for edge cases, versioning, auditability, traceability, improving quality over time, and so on.

  • Turn shadow users into spotlight users - If we really wrapped our arms around these innovators and gave them the right insight and support, we could generate a stunning library of compliant solutions that create real value across the organisation, and inspire others to do the same.

Practically speaking, this means taking people on a simple journey to empowerment and recognition:

  1. Upskilling the C-Suite, but remaining humble to create psychological safety

I’m sorry, but you’re going to have to start telling people that you’re not an AI expert. C-Suite leaders who claim to know it all are only going to create barriers to honesty and openness on the topic. An environment of psychological safety is critical to getting people to share what they’ve tried, and to acknowledge what else they need to learn to be effective and safe. At the same time, however, it’s important that you start to role-model the use of AI in the workplace that you want others to adopt. This means you’re going to need to be upskilled too, in very practical ways, as well as in the theory of risks and opportunities.

  2. Discuss what trouble looks like!

Be crystal clear about the absolute no-go zones. Be brutal about bringing to life the examples of where things go really wrong, and help people understand WHY. For example: sensitive data in public AI tools, AI-generated anything that touches regulatory filings, or letting AI make decisions about actual humans without human review - and educate people about data leakage, prompt injection, hallucinations and so on.

  3. Give people a playground

Give them somewhere safe to experiment, and show the potential using internal or synthetic data. Now get out of their way and let them play.

  4. Create a route from shadow to spotlight

Create a path from "I'm mucking about with ChatGPT" to "We built something properly brilliant". Usually this means providing the practical expertise to take an amateur prompt within one team to a solid, enterprise-wide piece of AI. For many organisations, these are skills you don’t currently have.

  5. Make Hero(in)es of the Careful Innovators

Don't celebrate the cowboy who built an AI tool that could have leaked data but didn't (this time). Celebrate Jennifer who built something brilliant AND got Security involved early AND documented everything AND made it reproducible. Make Jennifer famous. Give Jennifer a bonus. Be like Jennifer.

  6. Evolve governance as you go

You’ll have started with some very simple rules, and dos and don’ts, but it’s critical to evolve governance so that it learns from the experience you’re gaining. Your success stories, and the interventions you’ve made along the way, will be valuable evidence that governance isn’t just your ‘we don’t control what our people do’ waiver.

The Bottom Line

Your Shadow AI problem is actually a Shadow AI opportunity. The shadows are full of employees teaching themselves prompt engineering on their lunch breaks, building solutions to problems you didn't even know you had, one security review away from revolutionising how you work. They're not trying to undermine your governance. They're trying to do their jobs better.

The question isn't whether you'll allow AI in your organisation - that ship has sailed, caught fire, and been rebuilt as a spaceship by Dave from Procurement. The question is whether you'll help them do it brilliantly.

Shadow AI isn't your enemy. Ignorance is. Good governance isn't about control. It's about making the safe path the easy path.

And if you're wondering whether your people are ready for this level of AI enablement: they're already doing it. You're just deciding whether to help them do it well or not.

Can we help?

It never hurts to talk… we’re always open to a chat, whether it’s bouncing your ideas off us, or you’re actively looking for someone to help you take the next step in realising the benefits of the AI tooling you’ve invested in.




Let’s put AI to work.

Copyright © 2025 Valliance. All rights reserved.