Most organisations are already using AI.
In some organisations, AI has been introduced through formal programmes and structured rollouts. In others, it’s been far more organic, becoming part of day-to-day work. People are using tools to summarise documents, draft emails, sense-check ideas or speed up tasks that used to take longer. Suppliers are doing the same in the background. In many cases, it’s crept in alongside formal adoption.
That’s the reality most organisations are now dealing with. The question isn’t whether to adopt AI. It’s whether you have any real control over how it’s being used.
Where the risk actually sits
A lot of the conversation around AI still focuses on what might happen in the future. From what we’re seeing, the more immediate issue is much simpler. People are already putting information into tools they don’t fully understand.
That might be internal documents, bits of sensitive data, or content that gets reworked and shared elsewhere. It’s rarely done with any bad intent. Most of the time it’s just someone trying to do their job a bit faster. But it does change where your information goes and who has visibility of it, and that’s the bit many organisations haven’t quite caught up with yet.
Why existing controls don’t quite land
Most organisations aren’t starting from nothing. They already have policies that should, in theory, cover this. Data protection. Acceptable use. Information handling. In some cases, even AI-specific policies.
The problem is those controls weren’t written for how people are actually using these tools. They assume people know what counts as sensitive, which tools are approved, where data ends up and how outputs should be checked. In practice, those lines get blurred very quickly.
If someone pastes a paragraph into a tool to tidy it up or summarise it, it doesn’t feel like a security decision; it feels like getting on with work. That’s where the gap sits. The policy exists, but it’s not being applied in the moment.
The control problem
On paper, AI controls often look solid.
There are restrictions, approval processes and supplier checks – all the right things are there.
But when you look a bit closer, a few common issues tend to come up:
- Organisations don’t have a clear view of which tools are actually being used.
- There’s very little visibility of what happens to data once it leaves internal systems.
- Suppliers are using AI in their delivery without it always being clear where or how.
- Different teams have completely different interpretations of what “acceptable use” means in practice.
None of this is unusual. It just means the controls haven’t quite caught up with reality yet.
The part that often gets missed: how the tools are configured
A lot of the focus so far has been on behaviour, and rightly so. But there’s another part of the problem that tends to get less attention. Even where organisations have policies in place and a reasonable understanding of how AI is being used, the systems themselves are not always configured to support those controls.
This is where control either holds up or falls away. In many environments, AI functionality is being introduced through existing platforms: email systems, collaboration tools and third-party software. Features are switched on, integrations are enabled, and new capabilities become available to users very quickly.
Whether that creates risk or not often comes down to how those systems are configured.
Things like:
- what data AI tools can access
- whether sensitive information can be used or exported
- how outputs are stored or shared
- what logging or audit capability exists
- how access is controlled across different user groups
These are not policy questions; they are configuration decisions.
If those controls aren’t set up properly, organisations can find themselves in a position where the policy says one thing but the system allows something else entirely.
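To make that concrete, here is a minimal sketch of what a configuration-level check might look like. The config structure, field names and policy expectations below are illustrative assumptions rather than any particular vendor’s settings; the point is simply that these decisions can be written down and tested rather than left implicit.

```python
# Minimal sketch: comparing a hypothetical AI tool configuration against
# the kinds of policy expectations listed above. All field names and
# thresholds are illustrative assumptions, not any vendor's actual API.

from dataclasses import dataclass
from typing import List


@dataclass
class AIToolConfig:
    name: str
    data_scopes: List[str]        # what data the tool can reach
    export_allowed: bool          # whether data can leave the environment
    audit_logging_enabled: bool   # whether a usage trail exists
    allowed_groups: List[str]     # which user groups have access


def check_against_policy(cfg: AIToolConfig) -> List[str]:
    """Return a list of gaps where the configuration contradicts policy."""
    gaps = []
    if "confidential" in cfg.data_scopes:
        gaps.append("tool can read data classified as confidential")
    if cfg.export_allowed:
        gaps.append("data export is enabled")
    if not cfg.audit_logging_enabled:
        gaps.append("no audit logging, so usage cannot be reviewed")
    if "all_staff" in cfg.allowed_groups:
        gaps.append("access is open to all staff rather than defined groups")
    return gaps


if __name__ == "__main__":
    example = AIToolConfig(
        name="assistant built into a collaboration platform",
        data_scopes=["public", "internal", "confidential"],
        export_allowed=True,
        audit_logging_enabled=False,
        allowed_groups=["all_staff"],
    )
    for gap in check_against_policy(example):
        print(f"{example.name}: {gap}")
```

Even a lightweight check like this makes the gap between what the policy says and what the system allows visible, which is usually where the configuration conversation needs to start.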
You can’t control what you can’t see
Before anything else, organisations need a clearer picture of what’s happening.
- Where are people using AI?
- What kind of information is going into those tools?
- Which suppliers are building it into their services?
This isn’t about shutting it all down; that’s not realistic, and in most cases it wouldn’t make sense anyway.
It’s about understanding what’s already happening so you can decide where control is needed.
This is as much about behaviour as it is technology
People use these tools because they help them get through their workload. There’s a real pressure in the market to move faster, respond quicker and do more with less.
In some cases, there’s also an expectation to adopt it. Teams are being encouraged to use AI to improve efficiency, even if the guardrails aren’t fully clear yet. If something saves time, it gets used. If a process feels unclear or slows things down, people tend to work around it.
That’s why controls need to reflect how people actually work day to day. In practice, that means giving people clear guidance they can apply, showing what good use looks like in real scenarios and being honest about where the grey areas are. If people understand the “why”, they’re much more likely to apply the control properly.
The bit that sits outside your line of sight
There’s also a growing issue around third parties. A lot of suppliers are now using AI in their own delivery, whether that’s for processing data, automating tasks or generating outputs, and it isn’t always obvious.
That creates a layer of risk that sits slightly outside your direct control.
It’s worth asking how suppliers are using AI as part of their service, what happens to your data once it’s in their environment and where the boundaries sit.
For most organisations, this is still evolving, but it’s an area that’s only going to get more important.
A more realistic way of thinking about control
Trying to lock AI down completely isn’t going to work. It’s already too embedded in how people work.
A more practical approach is to focus on a few things that make a difference.
- Don’t keep your controls hidden in policy documents. Communicate directly with staff about how AI should be used.
- Be clear about what should and shouldn’t go into these tools, using real examples.
- Get a better understanding of where AI is being used, internally and externally.
- Make sure people know how to use it properly and where the risks sit.
- Keep an eye on where information is appearing and how it’s being used.
- Test how your controls hold up in practice, not just on paper.
Where this leaves organisations
AI is part of how most organisations operate today, whether that’s been formally acknowledged or not. The difference now is between organisations that understand how it’s being used and those that don’t.
The ones that tend to stay ahead aren’t the ones with the longest policies. They’re the ones that have a clearer view of what’s happening, invest in helping people use these tools properly and make sure the systems behind them are set up in a way that supports those controls.
Because in the end, control isn’t just what’s written down. It’s what people do under pressure and what the systems they rely on allow them to do in that moment.
