Consultants must not be caught sleeping at the wheel with AI-generated input

20 October 2025 Consultancy.uk

Big Four AI-powered eligibility systems for government programmes have caused widespread glitches, costly delays and systemic failures. In light of this, Leading Resolutions COO Jon Bance argues it's time for a new approach to public sector technology partnerships.

The use of AI by the world's biggest names in consulting has been making waves in recent weeks – but for all the wrong reasons. Firms tasked with justifying top government policies through research and thought-leadership have come under scrutiny for unforced errors in checking the output of their technology.

The most famous of these has seen Deloitte provide a partial refund to Australia's federal government, after its use of generative AI to help produce a report yielded a number of critical failures. In the $440,000 report, the 'researchers' claimed that Australia's Department of Employment and Workplace Relations was using IT infrastructure with a lack of "traceability" between the rules of the framework and the legislation behind it, as well as "system defects", while also claiming an IT system was "driven by punitive assumptions of participant non-compliance".

The Australian Financial Review subsequently reported that multiple errors had been found, including non-existent references and citations. Meanwhile, University of Sydney academic Christopher Rudge, who first highlighted the errors, said the report contained "hallucinations", where AI models fill in gaps and misinterpret data.

Jon Bance, chief operating officer at Leading Resolutions, a challenger consultancy, remarked, “In a story that sounds more like a sketch from Yes Minister than a government tech rollout, Deloitte has found itself issuing a £150,000 refund to the Australian government after its AI system began, shall we say, freestyling.”

Under the hood of every AI application are algorithms that churn through data in their own language, one based on a vocabulary of tokens. Tokens are tiny units of data that come from breaking down bigger chunks of information. AI models process tokens to learn the relationships between them, and generate content on the balance of probabilities – often euphemistically described as "guessing" or "reasoning" to come to conclusions. In this case, however, the guesses generated something Bance likened to "speculative fiction", which led Labor senator Deborah O'Neill to suggest Deloitte had "a human intelligence problem" for failing to spot the fabrications.
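The mechanics can be illustrated with a deliberately toy sketch (not any real model's implementation, and far simpler than the subword tokenisers and neural networks production systems use): break text into tokens, count which token tends to follow which, then "generate" by guessing the most probable next token. The corpus and function names here are hypothetical, chosen purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vast datasets and use subword
# tokens rather than whole words split on whitespace.
corpus = "the report cited the framework and the report cited system defects"
tokens = corpus.split()

# Learn relationships between tokens: simple bigram counts of which
# token follows which in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Guess the next token on the balance of probabilities:
    return the token most often seen following `token`, or None
    if the token never appeared with a successor."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "report" follows "the" most often here
```

The point of the sketch is the failure mode: the model has no notion of truth, only of which token is statistically likely to come next – which is exactly how a plausible-sounding but non-existent citation gets generated.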

Bance went on, “Now, before we chuckle too hard, there’s a serious lesson here: governance matters. It’s the difference between AI being a helpful assistant and it becoming your office’s most expensive fiction writer. Without proper oversight, even the most promising technology can veer off course, fast. So, whether you’re deploying AI, replatforming your CMS, or simply trying to get your chatbot to stop recommending cat food to dog owners, remember: governance isn’t bureaucracy, it’s your safety net. And if your AI starts quoting Shakespeare in a budget forecast, it might be time to check who’s minding the machine.”
