
AI governance and risk management from the perspective of banking and financial supervision

AI governance is a comprehensive framework that assigns responsibilities for the use of artificial intelligence within a company and ensures that AI is used safely, ethically, transparently and in compliance with the law. The new BaFin guidance clearly classifies artificial intelligence as an ICT risk under DORA. A three-pillar model of robust AI governance enables banks to meet both the strategic and the operational requirements.


What the BaFin guidance on ICT risks in the use of AI means for banks

AI governance is coming into sharp focus for banks. With the publication of the BaFin guidance on ICT risks associated with the use of AI in December 2025, the supervisory authority has sent a clear signal: artificial intelligence is no longer treated primarily as an innovation or ethics issue, but explicitly as part of ICT risk management under DORA. This places AI firmly at the heart of banking supervision.

This classification is of central importance for banks, as it significantly increases the requirements for governance, transparency and accountability. BaFin makes it clear that anyone who uses AI must understand, manage and control it like any other ICT system, while also taking into account the additional AI-specific risk dimensions.

Against this backdrop, a consistent three-pillar model of AI governance can be derived that is closely aligned with the expectations of the supervisory authority. Strategic anchoring, clear organisational embedding and controlled handling of AI throughout its entire life cycle are essential.

AI as an ICT system: the decisive change in perspective

One of the most important clarifications in the BaFin guidance is the definition of AI systems as a combination of ICT assets and ICT infrastructure.

AI systems consist of models, software, data, hardware, networks and interfaces and are therefore, from a supervisory perspective, nothing more than complex ICT systems with special characteristics. Degrees of autonomy or learning ability may be technically relevant, but they are not the supervisory priority. What matters more is how an AI system is integrated into the existing ICT landscape and what risks arise from that integration.

This view has far-reaching consequences for governance. It means that AI should not be regulated in isolation or treated in separate ‘AI frameworks’. Instead, regulators expect AI systems to be fully integrated into the existing ICT risk management framework, including identification, protection, detection, response and recovery in accordance with DORA.

AI governance thus becomes a logical extension of existing control mechanisms, not a replacement for them.
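
What such full integration can look like is sketched below. This is a minimal, hypothetical example assuming a Python-based asset inventory; the names AISystemAsset, DoraFunction and uncovered_functions are illustrative assumptions, not part of any BaFin or DORA specification.

# Illustrative sketch only: an AI system recorded as a composite ICT asset,
# with controls mapped to the five DORA risk management functions.
# All names and fields are assumptions, not a prescribed supervisory schema.
from dataclasses import dataclass, field
from enum import Enum


class DoraFunction(Enum):
    IDENTIFY = "identification"
    PROTECT = "protection"
    DETECT = "detection"
    RESPOND = "response"
    RECOVER = "recovery"


@dataclass
class AISystemAsset:
    """An AI system treated like any other ICT system in the inventory."""
    name: str
    components: list[str]  # models, software, data, hardware, interfaces
    controls: dict[DoraFunction, list[str]] = field(default_factory=dict)

    def uncovered_functions(self) -> list[DoraFunction]:
        """Return the DORA functions for which no control is assigned yet."""
        return [f for f in DoraFunction if not self.controls.get(f)]


# Example: a credit-scoring model inventoried alongside conventional systems.
scoring = AISystemAsset(
    name="credit-scoring-model",
    components=["model", "training data", "inference API", "GPU cluster"],
    controls={
        DoraFunction.IDENTIFY: ["asset register entry", "data lineage map"],
        DoraFunction.DETECT: ["drift monitoring", "anomaly alerts"],
    },
)
print(scoring.uncovered_functions())
# -> the protect, respond and recover functions, which still need controls

The point of the sketch is the structural one made above: the AI system appears in the same inventory, with the same control dimensions, as every other ICT system.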

The three-pillar model for robust AI governance

[Figure: the three pillars of AI governance: strategic anchoring, organisational embedding, and controlled handling of AI across the entire life cycle]

EU AI Act and DORA: two regimes, one governance approach

The guidance implicitly clarifies how the EU AI Act and DORA relate to each other. While the EU AI Act primarily addresses purpose, risk classification, transparency and human oversight, DORA focuses on resilience, operational safety and ICT risks. For banks, this does not mean double regulation, but rather complementary requirements.

Effective AI governance must therefore bring both perspectives together: it must ensure that AI applications are legally permissible and ethically acceptable, while at the same time being technically stable, secure and controllable. The three-pillar model offers a suitable framework for this because it interlinks strategic, organisational and operational aspects, thus doing justice to both regimes.
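
One simple way to picture this interlinking is a combined gap view across both regimes, sketched below. The checklist items are simplified assumptions for illustration, not quotations from the EU AI Act or DORA.

# Illustrative sketch only: open governance items from both regimes,
# collapsed into a single view. Item names are simplified assumptions.
AI_ACT_CHECKS = {
    "purpose documented": True,
    "risk class assigned": False,
    "human oversight defined": False,
}
DORA_CHECKS = {
    "registered as ICT asset": True,
    "resilience testing scheduled": False,
    "incident response mapped": True,
}


def governance_gaps(ai_act: dict[str, bool], dora: dict[str, bool]) -> list[str]:
    """Return all open items across both regimes as one combined list."""
    combined = {f"AI Act: {k}": v for k, v in ai_act.items()}
    combined.update({f"DORA: {k}": v for k, v in dora.items()})
    return [item for item, done in combined.items() if not done]


print(governance_gaps(AI_ACT_CHECKS, DORA_CHECKS))
# -> ['AI Act: risk class assigned', 'AI Act: human oversight defined',
#     'DORA: resilience testing scheduled']

The design choice behind such a combined view is that neither regime is tracked in isolation: an open item under either one keeps the AI application out of a compliant state.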

Conclusion: BaFin makes AI governance a management task

With its new guidance, BaFin has made it unmistakably clear that the use of AI in financial companies is no longer a grey area. AI is part of ICT risk management, is subject to DORA and must be managed accordingly. For banks, this means a clear mandate: AI governance is not an innovation project, but a permanent operating state that must be strategically managed, organisationally anchored and operationally implemented.

Institutions that take this requirement seriously gain more than regulatory certainty. They create transparency, increase their digital resilience and lay the foundation for the responsible and scalable use of AI.

Lisa Weinert

is a manager in the Business Development AI division at msg for banking. She advises banks and financial service providers on regulatory and procedural issues relating to the use of artificial intelligence. Her work focuses on the secure and compliant integration of innovative technologies into existing structures. It is particularly important to her that automation and AI are not seen as an end in themselves, but as concrete levers for process optimisation and efficiency gains in the banking environment.
