Blog post

Internal auditing and the use of AI: Useful. Dangerous. Inevitable.

Internal auditing has evolved considerably in recent years – and AI will continue to change it. AI can relieve, deepen and sharpen it. But only if the data quality, methodology and processes are right and there is clear AI governance.



Many audit departments are currently discussing tools: Copilot here, ChatGPT there, and on-premises deployment as a sedative.

But wait: that’s the wrong order.

The crucial question is: What audit problem are we solving – and how do we ensure that results remain verifiable?

Because AI is not automatically good. It is one thing above all else: effective. And effectiveness amplifies everything. Good methodology gets better. Bad methodology is exposed more quickly.

This article is deliberately critical. Not because I am against AI, but so that internal audit does not squander its brand essence: trust through traceability.

The shift in the audit model: from random sampling and rear-view mirrors to signals and early warning

Random sampling is not wrong. It is just often blind to patterns.

And modern risks in banks are rarely ‘a single event’. They are process logic: authorisations, interfaces, exceptions, manual overrides, provider dependencies.

This is where AI can help – if the audit model is turned around properly:

  • Analysis provides hypotheses, not findings.
  • Findings need evidence that can be reproduced by a third party.
  • Monitoring provides signals, auditing decides on audit procedures.

Continuous auditing does not mean auditing continuously. It is an operating model: a few reliable signals – and targeted, short audits based on them. This reduces the workload. And it sharpens the eye. But only if the fundamentals are right.

Where AI really adds value in internal auditing

1. Planning and scoping: AI as radar, not autopilot

AI can cluster findings, incidents, KRI trends and process data and show summaries. Where are exceptions accumulating? Where are returns increasing? Where is manual rework tipping over?
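To make this concrete, here is a minimal sketch of what such a planning signal can look like in code. All process names and figures are invented; a real implementation would draw on findings databases, incident logs and KRI feeds.

```python
from collections import defaultdict

# Hypothetical exception log entries: (process, month, count).
# All names and figures are illustrative, not real audit data.
exceptions = [
    ("payments", "2025-01", 12), ("payments", "2025-02", 19),
    ("payments", "2025-03", 31),
    ("credit",   "2025-01", 8),  ("credit",   "2025-02", 7),
    ("credit",   "2025-03", 6),
]

# Group monthly counts per process to see where exceptions accumulate.
by_process = defaultdict(list)
for process, month, count in sorted(exceptions, key=lambda e: e[1]):
    by_process[process].append(count)

# A process becomes a planning signal if its exception count keeps rising.
signals = [p for p, counts in by_process.items()
           if all(a < b for a, b in zip(counts, counts[1:]))]
print(signals)  # → ['payments']
```

The point of the sketch is the division of labour: the code surfaces a pattern; deciding whether "payments" belongs in the audit plan remains a human judgement.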

This noticeably speeds up planning.

However, it also increases the risk of a false sense of security: ‘The AI said the issue is critical.’

No! It calculated patterns. The risk assessment remains human.

If data quality, definitions or data owners are unclear, planning only becomes faster – not better. Then ‘strange results’ are discussed later in the steering committee. And that costs trust.

2. Implementation: Three use cases that have a real auditing effect

Payment transactions and finance – anomalies as triggers:

Double payments, unusual booking chains, conspicuous master data changes, temporal clusters: AI does not find fraud. It provides signals that make an audit accurate.

Credit process and authorisations – patterns instead of individual cases:

In banks, risks often arise not in the model but in the chain: Who is allowed to override? Where are exceptions approved? Where do roles accumulate? Analytics can reveal deviation routes that are hardly noticeable with interviews and random sampling.

Outsourcing, providers and contracts – NLP as a gap finder:

When dealing with large volumes of contracts, natural language processing (NLP) can help to identify minimum content, deviations and missing clauses more quickly. The technical article by the German Institute for Internal Auditing (DIIR) emphasises precisely these opportunities – and also clearly identifies risks such as data protection, data security and dependencies when sensitive information ends up in the wrong environment.1
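As a minimal sketch of the gap-finder idea: a keyword check against a list of required minimum clauses. Real NLP tooling would go well beyond pattern matching (synonyms, clause semantics, multilingual contracts); the clause list and contract text below are invented for illustration.

```python
import re

# Minimum clauses an outsourcing contract is expected to contain
# (the clause list and the contract excerpt are illustrative).
required_clauses = {
    "audit rights":    r"\baudit\s+rights?\b",
    "termination":     r"\btermination\b",
    "data protection": r"\bdata\s+protection\b",
    "sub-outsourcing": r"\bsub-?outsourcing\b",
}

contract_text = """
The provider grants the institution full audit rights.
Termination requires three months' written notice.
"""

# Report which required clauses could not be matched in the text.
missing = [name for name, pattern in required_clauses.items()
           if not re.search(pattern, contract_text, re.IGNORECASE)]
print(missing)  # → ['data protection', 'sub-outsourcing']
```

Even this toy version illustrates the league difference mentioned above: the script can say a clause was not *found*; whether the contract is actually *deficient* is a legal and audit judgement.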

The point is: ‘finding’ documents is good. ‘Evaluating’ documents is a different league altogether.

3. Reporting and follow-up: GenAI can handle language – and that is precisely what makes it dangerous

GenAI is astonishingly powerful when it comes to reporting: structure, consistency, clean wording, action texts, follow-up lists.

At the same time, this is where the greatest reputational risk lies.

GenAI produces convincing sentences even when the evidence is thin. This feels efficient. But it is extremely dangerous because credibility is the currency of internal auditing.

When GenAI is used in reporting, strict rules are needed: a text generated by GenAI is initially only a suggestion for wording – not a verified audit result. Every sentence must be traceable. Prompting, versioning and review obligations are part of this – not as bureaucracy, but as protection.

4. The uncomfortable topics: governance beats tool selection

Anyone who introduces AI into internal auditing without first establishing governance will create new findings – in their own company.

Four questions determine whether AI strengthens or weakens internal auditing:

1. Traceability: Can I explain why a result is the way it is?

2. Reproducibility: Will the same result be produced tomorrow (logging, versioning)?

3. Data protection and confidentiality: Which data may be used? Which data may never be used? Who controls this?

4. Competence: Can auditors verify plausibility – or do they only consume output?

If any of these questions remain unanswered, then AI is not progress. It is merely acceleration in the fog.
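The reproducibility question in particular can be answered technically. A minimal sketch: fingerprint every analysis run over its input data, parameters and code version, so a third party can verify that tomorrow's re-run used the same ingredients. The function and its fields are an illustrative assumption, not a standard.

```python
import hashlib
import json

def run_fingerprint(data_bytes: bytes, params: dict, code_version: str) -> str:
    """Fingerprint one analysis run so a third party can check that a
    re-run used the same data, parameters and code version."""
    payload = {
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
        "params": params,  # thresholds, model settings, ...
        "code_version": code_version,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Same inputs → same fingerprint; any change is immediately visible.
fp1 = run_fingerprint(b"payments.csv contents", {"threshold": 3}, "v1.2")
fp2 = run_fingerprint(b"payments.csv contents", {"threshold": 3}, "v1.2")
fp3 = run_fingerprint(b"payments.csv contents", {"threshold": 4}, "v1.2")
print(fp1 == fp2, fp1 == fp3)  # → True False
```

Logging such a fingerprint with every run is cheap – and it converts "will the same result be produced tomorrow?" from a hope into a check.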

Regulatory tailwind: AI becomes a subject of scrutiny, not just a tool

The regulatory framework is tightening – and this directly affects governance expectations.

The EU AI Act has been in force since 1 August 2024. Obligations are being phased in gradually; a large part will become widely applicable from 2 August 2026.2

In practical terms, this means that the use of AI is not ‘innovation’. It is governance – and therefore subject to scrutiny.

The profession itself is also raising the bar: the Global Internal Audit Standards have been in effect since 9 January 2025.3

And in the banking environment, there is also the ICT/security setting: the European Banking Authority published its Guidelines on ICT and Security Risk Management with an effective date of 20 May 2025.4

The bottom line is that there is less and less room for ‘let’s give it a try’. Not because innovation is prohibited, but because controllability is required.

A successful start: start small, secure thoroughly, scale cleanly

If an audit management team wants to get off to a serious start today, I don’t recommend a big bang approach. I recommend a pilot project that can be audited.

  1. A use case that is measurable (e.g. payment transaction anomalies or contract gaps in outsourcing).
  2. Data map and protection classes (source, quality, owner, access, storage).
  3. Document methodology (rules/models, thresholds, version, limits).
  4. Pilot as a real audit: data → analysis → evidence → findings → measures → follow-up.
  5. Quality assurance: dual control principle, logging, reproducibility, change management.
  6. Scaling only afterwards: and only where signal logic really works.

This is how audit effectiveness is created. And this is how board readiness is created.


Conclusion: AI is not an upgrade. It is a stress test.

AI can relieve, deepen and sharpen internal auditing. It can reveal patterns that were previously invisible.

But it does not forgive sloppy foundations. Bad data remains bad data. Lack of governance remains lack of governance. And beautifully worded texts are no substitute for evidence.

Those who use AI correctly make internal auditing less documentation-heavy – and significantly more of a challenger.

Those who use it incorrectly become faster, but also more vulnerable.

If you, as an audit manager or board member, want to use AI in internal auditing, it is worth starting off cautiously: one use case, one verifiable pilot, clear guidelines. The rest will then follow – or at least it should.

Sources
Thorsten Tewes

has many years of professional experience in auditing, organization, and compliance at banks and savings banks. At msg for banking, he is responsible for organization, corporate governance, and audit support. Together with his team in Management & Business Consulting, he develops comprehensive solutions for reorganizing structures, processes, and internal control systems within banks and savings banks. As part of co-sourcing, he supports representatives and internal auditors in carrying out audit procedures.
