Majority of UK public don’t trust AI

With the technology regularly making headlines for its ‘hallucinations’, a majority of UK residents say they do not trust content generated by artificial intelligence. A new study from KPMG and the University of Melbourne also finds that fewer than a third of people in Britain have received training in using AI at work – suggesting that organisations may not be as invested in AI tools as their announcements have made it seem.
Despite the uncertainty they face, with headwinds including inflation and geopolitical crises lingering into 2025, businesses around the world remain broadly optimistic about their own outlooks. One of the key drivers of this over the last three years has been the reported potential of artificial intelligence – and particularly generative AI – which a mountain of corporate studies has suggested will buoy productivity and help find efficiency savings even amid tough times at a national or international level.
Reflecting this, in 2024, Eden McCallum spoke to 200 business leaders across the UK, and found that the importance of GenAI on their agendas had risen steadily. While 55% said in 2023 that it would be important over the coming three years, that figure hit 62% the following year. In particular, bosses are drawn to GenAI by its promise of savings, with 75% saying they expect it to help reduce costs and boost efficiency.
According to the same report, this was already translating into rapid adoption of GenAI: 57% of leaders said they were at least exploring use cases for the technology, while 58% said they were already piloting them. But one year later, a separate study from KPMG suggests the way firms talk about AI may differ sharply from their actual experiences with it.
Working in collaboration with the University of Melbourne, the researchers polled a nationally representative sample of 1,029 people in the UK – including 617 workers – between November 2024 and January 2025. They found that only 27% of people in the UK had received AI education or training.
This is very low compared to the leading examples of Nigeria, Egypt and the UAE, where more than two-thirds of respondents had received such training. However, it is not far from the general trend across most developed economies, with the likes of Germany, Japan, France and Canada all having seen organisations invest less in formally educating their staff on the technology.
Even so, 65% of UK workers say they use AI at work – suggesting much of this use is happening on an ad-hoc basis, and that if it is not at the behest of their employers, workers are unlikely to be using tools the organisation actually provides. With 44% saying they are “concerned about being left behind if they don’t use AI at work”, it may be that they feel pressured to demonstrate its use amid the hype – even if they don’t trust it.
That lack of trust is borne out in the numbers: while 65% of UK workers now use AI at work, only 42% of the UK public are willing to trust the technology – and 78% are concerned about the negative outcomes it could have.
So, what might be giving these people pause for thought when it comes to AI use? Despite proponents’ claims that the technology is progressing rapidly, the accuracy of AI-generated content has stalled – with a number of notable studies suggesting it may actually have declined in recent months. A recent BBC study found that when researchers fed BBC news stories to ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI, the resulting summaries contained “significant issues” in 51% of cases. Meanwhile, the Tow Center for Digital Journalism recently studied eight AI search engines – including the three that consultants favour most – and found their results were incorrect in 60% of cases.
At the same time, these concerns may be manifesting in the C-suite, as reflected in its unwillingness to invest directly in the technology – or at least in training. But the promise of ‘boosted productivity’ – more, faster (though not necessarily more accurate) output – could be proving a siren’s call that bosses cannot resist. Such an approach risks leaving firms open to major issues further down the line, however – something which KPMG concludes necessitates a new mode of “responsible” AI adoption.
Leanne Allen, head of AI at KPMG UK, said, “For organisations already integrating AI, it's crucial to assess and manage associated risks while providing robust training on AI ethics. They need a long-term strategy that breaks ingrained habits and adopts new ways of working collaboratively with AI. Meanwhile, businesses considering AI adoption should look at embedding controls and assurances to manage risks effectively. Regardless of where an organisation is on its AI journey, monitoring controls and cultivating a culture that understands and encourages responsible AI use is key to maximising benefits and minimising risks.”