Why professional services must secure AI before they scale it

05 November 2025 Consultancy.uk

Across the board, professional services firms are ramping up their AI adoption to match the speed of business and align with market demands. But according to Joris van der Gucht, CEO and co-founder of Ravical, the controls needed to manage these tools are not being implemented fast enough.

More firms are experimenting with AI across different functions, but where businesses lack a cohesive adoption strategy and teams are left to their own devices, control falls by the wayside. Without guardrails, managers lose track of where AI sits within their organisation, how it’s being used by each business unit and whether all of this aligns with their compliance requirements.

This is when AI becomes a black box in business systems. It’s not always clear from the outputs of these tools how they were produced or what data was used, so firms are unable to explain or verify the process behind a result. The risk is that firms leave themselves exposed to compliance breaches when consumer AI tools are used with sensitive client and employee information.

Why is shadow AI becoming such a problem?

From conversations with partners and firm owners, we know that advisors are quietly using ChatGPT or similar tools for day-to-day tasks like drafting client emails, reviewing technical content or summarising long documents. What’s less clear, but still very possible, is whether some teams have gone a step further and fed in client details to give the model more context. It’s important to recognise that these practices are not done with bad intent, but simply as a way to save time and improve communication quality. The problem is that it all happens in isolation, outside any approved workflow or data safeguards, so the firm has no view of what’s being shared or how outputs are being used.

That’s where they start to lose control over both data and quality. Sensitive client information can easily be shared with public models, and, even internally, advice may be based on unverified outputs that look confident but are in fact inaccurate. This can lead to an inconsistent tone, misaligned advice or even accidental data breaches.

The real risk isn’t the technology itself; it’s the lack of traceability and human review, and too much confidence being placed in AI tools without the right measures. As a deterrent, some firms have implemented policies blocking tools like ChatGPT, but such bans are very hard to enforce given that individuals can still use the tools on their personal devices. Although firms are doing everything they can to educate employees through policies and security training, usage remains largely uncontrollable. When the benefits are so widely known, the temptation to use AI quietly is great. But this is clearly a huge compliance risk for expert firms handling sensitive client data.

Ultimately, the rise of shadow AI becomes a leadership responsibility. Advisors turn to their own tools in an effort to work more efficiently, which usually means the firm hasn’t caught up yet. It cannot be labelled as a technology failure – it’s up to business leaders to provide clear, supported ways to use AI responsibly. The fix is therefore cultural and structural, not just technical.

The growing need for explainable AI

Providing clear guidelines on when and how AI tools can be used is step one. Then it’s about putting those guidelines into practice. If advisors have a clear reference point for acceptable and responsible AI use, the risk of shadow AI creeping in should significantly lessen.

Beyond greater visibility of AI usage, firms also need guarantees that explainability is combined with governance and oversight. If AI is used in client communications, for instance, it must be able to demonstrate its reasoning and the sources it drew on, so that advisors can follow the logic and judge its quality. Alongside this, guardrails should ensure the firm’s tone of voice, templates and standards are consistently applied across every output.

This level of transparency cannot be delivered by generic, off-the-shelf tools. It demands careful design, close collaboration between technologists and practitioners, and continuous refinement as both regulations and client expectations evolve. When firms achieve this balance, adoption becomes safer and more effective. It shows where human expertise remains essential, while also allowing services to scale in a way that does not erode trust.

Scaling new services without compromising trust or compliance

Experimentation with AI is a healthy practice and should be encouraged, but there’s a clear threshold that must be upheld. Once AI touches client data, client communication or anything that influences professional advice, it needs to happen inside a controlled system. Firms can run sandboxes or internal pilots to explore tools safely, but using unapproved AI for genuine work is where experimentation turns into risk exposure.

AI is such a powerful tool for professional services, which is why shadow AI has become such a widespread issue. Firms must normalise AI as part of everyday work rather than treating it as an add-on or wrapping it in unnecessary red tape. Leaders should provide their teams with approved tools that are auditable, transparent and easy to trust, and then set clear boundaries: what’s allowed, what needs to be reviewed and where human oversight fits in.

When individuals see that AI can be used as a trusted tool, they’ll start to use it more openly and responsibly. Rather than slowing innovation, good AI governance actually unlocks it, giving professionals confidence that AI supports their judgement rather than undermining it.