Consumers turn to AI in UK’s information vacuum
In a vacuum in which access to public services and health advice has been cut back, UK consumers are increasingly turning to AI for support with decision-making. A poll from IBM suggested convenience and speed were the most important factors driving the shift – though the majority of consumers are still not confident in reaching decisions based on AI input alone.
The study saw Big Blue commission Censuswide to poll 2,257 people over the age of 18 across the UK and Ireland. Conducted in September 2025, the polling focused on consumers who were explicitly aware when they were interacting with AI.
The results suggest that despite ongoing concerns around the technology’s privacy and accuracy, a growing number of people feel inclined to at least check personal matters with ChatGPT. Some 74% of those surveyed said they were now comfortable with AI playing a role in their decision-making, ranging from generating personalised suggestions to making household financial selections on their behalf.
With public sector services aimed at supporting citizens having been slashed in recent years, consulting knowledgeable sources about matters of personal importance has become increasingly difficult in the UK – and across much of Europe and North America. In that void, 40% of consumers polled said the convenience and speed of AI led them to ask it questions, while 35% said the chance to access 24/7 support was the biggest influence on their use of it. A further 34% said the security and privacy of the discussion was important to them – though responses to other questions suggest this is still not seen as a pillar of AI interaction.
Rising trust?
According to IBM, the data shows trust in AI is rising: 79% of those surveyed said they trusted interactive AI experiences such as chatbots to deliver reliable results, while a further 72% of those polled reported enjoying using them. And while the survey’s pool of opinions might have excluded many consumers who don’t trust the technology – as they are less likely to be interacting with it in the first place – IBM believes the findings point to a maturing market for AI services in the UK.
Leon Butler, IBM’s CEO for the UK and Ireland, commented, “Our study shows that users are quickly climbing the curve of customer comfort when it comes to AI assistants. AI is increasingly influencing the way we live, work, shop and innovate and this is a prime moment for UKI enterprise to scale agentic AI to deliver customer-led growth.”
To that end, IBM also said that 48% of respondents were comfortable with the idea of using an AI assistant for decisions such as enrolling in a paid service, with 57% comfortable with AI making everyday choices and 62% with using AI for personalised suggestions. Butler claimed that to “seize this opportunity”, organisations should be deploying “production-ready agents” and building “AI and digital skills” in their workforce.
As one of the world’s largest technology and business services firms, IBM is itself heavily invested in the proliferation of AI technologies in every facet of modern life. The firm hosts a centre of excellence for generative AI in the UK; has rolled out a new Advantage line of services to help clients deploy the technology; and has even partnered with popular sporting events like Wimbledon to install new tools aimed at boosting fan engagement.
As enthusiastic as IBM might be about the technology, however, UK consumers – even those who profess to trust its accuracy – may not be quite as fervent as the research’s headline figures suggest. Retaining transparency and control when AI makes decisions was valued by 63% of respondents, while 44% were wary of the privacy and security implications of sharing their data. And while Butler noted that getting the most from the technology depended on addressing this, previous form suggests AI leaders have rarely actively prioritised these factors.
At the same time, while close to three-quarters of respondents said they would be happy to have AI contribute to their decision-making, 54% said they would not be confident making decisions based solely on AI-generated information. This indicates that even after the scaling back of public points of information, such as the Citizens Advice Bureau or the NHS, across 15 years of government-driven austerity, people remain unwilling to lean on the quick and cheap world of AI as a substitute for human expertise.
Dangers of losing touch
That may be just as well. Recent months have seen a worrying spike in headline-grabbing cases where AI ‘advice’ has had devastating impacts on human lives. In the US, a recent study from the Center for Countering Digital Hate revealed ChatGPT gave teens dangerous advice about drugs, alcohol and suicide. Meanwhile, a 60-year-old man accidentally poisoned himself and entered a psychotic state after ChatGPT suggested he eliminate salt, or sodium chloride, from his diet and replace it with sodium bromide, a toxic compound used in, among other things, treating wastewater.
An article in Futurism in September covered a growing list of couples who say their relationships have been destroyed by advice from ChatGPT, which they had treated as a cheap alternative to marriage counselling. Critics allege that ChatGPT – a piece of technology designed in part to foster continued engagement, with a view to future monetisation through advertising – ultimately provides an unchallenging feedback loop for each user, amplifying whatever they put in rather than offering the kind of objective analysis it is being used for.
Another recent case documented by the BBC saw a man in Scotland become convinced he was about to become a multi-millionaire after turning to ChatGPT to help him prepare a wrongful dismissal case against a former employer. In a feedback loop of his own, the man was eventually told he could “get a big payout”, and later said his experience was so dramatic that a book and a movie about it would make him over £5 million. He added, “it never pushed back on anything I was saying”.
In this case, he would have counted among the 46% of consumers cited by IBM who would be confident using the tool for advice on its own – and his story provides a cautionary tale to that end. While the tool did advise him to talk to Citizens Advice, he said he was so certain the chatbot had already given him everything he needed to know that he cancelled the appointment.
Having decided that screenshots of his chats were proof enough, the man – who the BBC said was suffering from additional mental health problems – “eventually had a full breakdown”. Taking medication eventually led him to realise that, in his words, he had “lost touch with reality”.
Speaking after the events, he warned, “Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality.”
