Three hidden risks of seeking legal advice from AI tools

06 June 2025 Consultancy.uk

As a growing number of businesses and individuals turn to AI and Gen AI tools for legal advice, it is important that they keep in mind the risks associated with large language models, write Elouisa Crichton and Zeena Saleh from Dentons.

AI chat tools offer instant and often free-of-charge responses to legal queries, much to the gratification of those seeking a quick answer to a question that might take lawyers hours or even days to address, usually for a fee. While ‘AI law firms’ – which rely on user-driven, semi-automated claims-handling systems for relatively low-value litigation – have begun to appear in the UK, these are authorised and regulated legal services, whereas AI chat tools are not.

By taking matters into their own hands and using AI chat tools for legal advice – whether in commercial, regulatory, dispute resolution, or employment contexts – users can create serious risks for themselves or their organisations.

Although AI can undoubtedly be a useful tool, it cannot replace privileged, professional advice, and its use could expose an organisation in unexpected ways.

The advice is not legally privileged

Certain communications with qualified lawyers – and, depending on the circumstances, some other categories of people – can be protected by legal professional privilege. This means that, provided the necessary confidentiality is maintained, the content of those communications cannot be disclosed to the world at large.

Conversations with AI tools do not carry this protection. If a person asks AI about dispute strategy, regulatory exposure, or employee grievances, those queries could be disclosable in litigation, including employment tribunals and regulatory or law enforcement proceedings.

The fact that the query was made, and the text of the query itself, may also appear in a Data Subject Access Request (DSAR) if an individual’s personal data is included.

This is because individuals have a right to request data held about them; so, if the AI chat was stored and included personal data, there are ways for individuals to obtain access to it.

For example, using AI to draft an employee disciplinary outcome or redundancy script could lead to that full exchange (not just the final product) being disclosable and reviewed in a tribunal, exposing the author's rationale and potentially weakening their case.

Possible scenarios could include a human author asking AI how to make a dismissal fair – exposing the fact that they had potentially pre-judged an outcome and were trying to make the circumstances fit; or asking AI if a certain rationale for dismissal would be discriminatory – which could suggest a discriminatory reason was on the mind of the decision maker.

Conversations with AI may be disclosable

AI chat tools create a record, much like other well-known online messaging tools, yet many organisations lack clear governance around how to treat these records. For example, it is not always clear whether these digital records are retained by the business or for how long, or under what circumstances the organisation may access individual communication logs.

Without a structured approach to digital communications, businesses may face challenges in a number of areas.

These include litigation holds – legal notices sent by organisations facing litigation to employees and other key individuals, outlining their obligation to preserve relevant information – where it can be unclear whether AI-generated chat logs should be included.

If someone has raised a claim, the AI chat may contain a great deal of relevant evidence, and it would be problematic if the organisation holding that data allowed it to be deleted in line with usual retention periods while a court case or enforcement matter was pending and the data should have been disclosed as part of those proceedings.

It can also be difficult to settle on retention periods: deciding how long AI usage data should be stored, and whether it can be retrieved at all.

Many organisations delete these chats quickly, but the picture is inconsistent, and organisations using this technology are often not certain how long the exchanges are stored.

Seeking legal advice from AI tools comes with its benefits and risks

Users cannot rely on the quality of the advice

Even leaving legal privilege aside, AI-generated advice has significant practical limitations.

AI responses often fail to capture all legal, commercial and strategic risks. While the advice might look correct, it could be out of date (especially if the chat tool is a basic free version rather than a paid-for premium service), not fit for its intended purpose (especially if the responses vary each time a question is asked), or otherwise deficient, exposing the user to commercial risk.

Unlike lawyers, AI chat tools have no accountability for the advice they give or its consequences, and are not covered by professional indemnity insurance, meaning that if incorrect advice is followed, there is no recourse and no way of recovering the costs incurred as a result.

What should organisations do?

As AI becomes a pervasive and increasingly attractive tool for business, it is unrealistic to expect organisations to refrain from using it for various purposes, including legal ones – especially when individuals may deploy the technology in secret.

To manage the risks, organisations should have a clear AI usage policy that defines when using AI is appropriate (for example, for summarising laws); when it should never be used (such as for legal disputes or sensitive HR matters); when input data must first be anonymised and/or stripped of confidential information; and when the AI's output must be checked by humans.

It is also important that organisations update their data privacy and retention policies to explicitly include AI interactions in governance frameworks.

Organisations should consider disclosure risks, ensure AI use is factored into litigation and regulatory investigations, and reserve AI for non-contentious queries, such as general legal research, rather than decision-making in high-risk scenarios.

AI can be a useful tool, but it is no substitute for privileged, strategic legal advice. When legal risks are high, talk to a lawyer, not a chatbot.

About the authors: Elouisa Crichton is a partner in the People, Reward and Mobility practice of Dentons; Zeena Saleh is a partner in the Global Compliance & Investigations practice.