Why traditional due diligence models are struggling to keep up with scale
AI is rapidly making its way into the world of due diligence – including in the risks landscape. Toby Thomas, Director at S-RM, outlines the significant potential the technology offers – and explains why those who succeed will be the ones who combine AI’s power with the human judgement on which effective due diligence ultimately depends.
The last year has brought about the integration of AI across business, and the due diligence and intelligence world is no different. At S-RM, we have observed a variety of approaches from clients. These range from technology firms seeking to integrate generative AI-driven search into their compliance offerings, through to regulated financial institutions that are more cautious about innovation but nonetheless eager to understand their options.
The volume of available information has grown exponentially, much of it unverified, politicised or generated by AI, making it harder to distinguish genuine risk from background noise. At the same time, regulatory expectations remain as high as ever. What constitutes “enhanced due diligence” in an AML context is often open to interpretation, but the underlying requirement is clear – regulators expect a detailed understanding of a counterparty, not simple box ticking.
Traditional compliance databases remain an important part of the due diligence toolkit. However, they are inherently static and frequently generate large volumes of false positives requiring exhaustive review.
For many organisations, the real challenge is not access to information, but how to process it efficiently while remaining confident that their approach would withstand regulatory scrutiny.
The limits of screening alone
Screening-level checks remain effective at identifying the most obvious red flags, such as sanctions or watchlist matches. However, they are rarely sufficient in higher-risk scenarios where reputational, regulatory or operational exposure is more complex.
Enhanced due diligence requires judgement, context and an understanding of nuance. Applying full investigative processes to every case is neither efficient nor realistic. Yet relying solely on automated screening tools leaves material risks unaddressed. This tension sits at the heart of modern due diligence: teams are triaging increasing volumes of data, under time pressure, with increasing regulatory scrutiny.
How AI is changing the equation
AI is increasingly being used to help resolve this tension. Its primary value lies in speed, consistency and scale.
Used appropriately, AI can support early-stage triage by resolving entity confusion, distinguishing between similarly named individuals, surfacing relevant adverse media and consolidating corporate ownership information from multiple sources. This allows due diligence teams to prioritise cases more effectively and reduce the time spent on low-value manual searches.
Generative AI has proved transformative in areas such as coding and data processing. In due diligence, its impact is most evident in managing scale. Adoption, however, remains cautious: recent research in the private equity sector found that only around 7% of leading firms have fully embedded AI across their workflows. This hesitation reflects industry concerns around governance and regulatory exposure.
Crucially, this does not mean that AI replaces investigative judgement. Many of the most significant risks are qualitative and context dependent. Assessing the credibility of a source, understanding local political dynamics, or interpreting information across non-Latin alphabets remains a human task.

Why human oversight still matters
Recent examples of AI misuse, including in public sector contexts, have highlighted the risks of over-reliance on automated outputs. Without clear governance, AI can scale inaccuracies as easily as efficiencies, particularly when outputs are treated as definitive rather than indicative.
For this reason, the most effective approaches treat AI as an enabler rather than a decision maker. Human oversight remains essential, particularly where enhanced due diligence is required and regulatory expectations are high.
Upstream governance and clear guardrails are equally important. Clear internal review processes and accountability are critical to maintaining trust in AI-supported outputs. As regulators increasingly scrutinise what qualifies as enhanced due diligence, organisations must be able to demonstrate not only that checks were conducted, but how conclusions were reached.
Moving towards proportionate due diligence
As AI becomes more embedded, many organisations are adapting their due diligence frameworks to be more risk based and proportionate.
In practice, this involves using AI-supported analysis to prioritise cases early in the process. Lower-risk issues can be cleared more efficiently, while higher-risk subjects are escalated for deeper investigation. This allows organisations to allocate resources more effectively and focus expertise where it adds the most value.
Some organisations are also introducing intermediate stages between basic screening and full enhanced due diligence. These stages provide additional scrutiny without immediately triggering resource-intensive reports, helping teams manage volume while maintaining regulatory defensibility.
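The tiered, risk-based routing described above can be sketched in code. To be clear, this is a minimal illustration, not a real screening methodology: the indicators, weights and thresholds are assumptions invented for the example, and in practice each organisation calibrates such logic to its own risk appetite and regulatory obligations, with human review of every routing decision.

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    sanctions_match: bool        # confirmed sanctions/watchlist hit
    adverse_media_hits: int      # AI-surfaced articles judged relevant
    high_risk_jurisdiction: bool
    ownership_unresolved: bool   # beneficial ownership not yet mapped

def risk_score(case: Case) -> int:
    """Combine risk indicators into a rough score (weights are illustrative)."""
    score = 0
    if case.sanctions_match:
        score += 10                          # confirmed matches always escalate
    score += min(case.adverse_media_hits, 5) # cap media contribution
    if case.high_risk_jurisdiction:
        score += 3
    if case.ownership_unresolved:
        score += 2
    return score

def triage(case: Case) -> str:
    """Route a case: low scores clear, mid-range scores go to an
    intermediate review stage, high scores are escalated for
    enhanced due diligence (thresholds are assumptions)."""
    score = risk_score(case)
    if score >= 8:
        return "enhanced due diligence"
    if score >= 3:
        return "intermediate review"
    return "clear"
```

The point of the sketch is the shape of the workflow, not the numbers: automated indicators feed a prioritisation step, and only the higher tiers consume investigator time.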
AI-supported due diligence in practice
Many organisations now use AI-enabled tools to support their due diligence triage, helping to identify potential risk indicators across jurisdictions, languages and sources.
We take a similar position. At S-RM, we use AI-supported tools, including solutions such as Perspecta Diligence, to assist with early-stage analysis such as entity resolution, adverse media identification and initial ownership mapping.
Crucially, these outputs are not treated as conclusions. They are reviewed by our global team of corporate intelligence researchers, who then apply their regional expertise, linguistic capability and sector knowledge to assess reliability and context. This human review is particularly important in markets where information is sparse, politically influenced or difficult to interpret.
For us, AI helps focus human effort where risk is highest, rather than replacing judgement.
A cautious path forward
Despite rapid advances in capability, AI is being adopted with caution. Organisations continue to balance pressure from leadership to innovate with the need to manage regulatory and reputational risk. As a result, approaches to AI-supported due diligence still vary widely.
Over time, this divergence is likely to narrow. As tools improve, governance frameworks mature and best practice becomes clearer, approaches are expected to converge. Those that succeed will be the ones that use AI to enhance speed, focus and consistency, while preserving the human judgement that effective due diligence ultimately depends on.

