AI governance and risk management from the perspective of banking and financial supervision
AI governance is a comprehensive framework that defines responsibilities for the use of artificial intelligence in a company and ensures the safe, ethical, transparent and legally compliant use of AI. The new BaFin guidance clearly classifies artificial intelligence as an ICT risk under DORA. With a three-pillar model and robust AI governance, banks can meet both the strategic and the operational requirements.
What the BaFin guidance on ICT risks in the use of AI means for banks
The topic of AI governance is becoming a focus for banks. With the publication of the BaFin guidance on ICT risks associated with the use of AI in December 2025, the supervisory authority has set a clear tone: artificial intelligence is no longer primarily considered an innovation or ethics issue, but explicitly part of ICT risk management under DORA. This finally puts AI at the heart of banking supervision.
This classification is of central importance for banks, as it significantly increases the requirements for governance, transparency and accountability. BaFin makes it clear that anyone who uses AI must understand, manage and control it like any other ICT system, while also taking into account the additional AI-specific risk dimensions.
Against this backdrop, a consistent three-pillar model of AI governance can be derived that is closely aligned with the expectations of the supervisory authority. Strategic anchoring, clear organisational embedding and controlled handling of AI throughout its entire life cycle are essential.
AI as an ICT system: the decisive change in perspective
One of the most important clarifications in the BaFin guidance is the definition of AI systems as a combination of ICT assets and ICT infrastructure.
AI systems consist of models, software, data, hardware, networks and interfaces and are therefore, from a supervisory perspective, nothing more than a complex ICT system with special characteristics. Degrees of autonomy or learning ability may be technically relevant, but they are not a priority for supervision. What is more important is how an AI system is integrated into the existing ICT landscape and what risks arise from this.
This view has far-reaching consequences for governance. It means that AI should not be governed in isolation or treated in separate ‘AI frameworks’. Instead, regulators expect AI systems to be fully integrated into the existing ICT risk management framework, including identification, protection, detection, response and recovery in accordance with DORA.
AI governance thus becomes a logical extension of existing control mechanisms, not a replacement for them.
The three-pillar model for robust AI governance
Pillar 1: AI strategy – why supervision cannot work without a target vision
From BaFin’s perspective, effective ICT risk management for AI begins at the strategic level. The guidance makes it clear that institutions should generally develop an AI strategy aligned with their overall strategy, risk strategy and digital operational resilience strategy, and have it approved by the management body. This strategy is not an end in itself, but serves to consciously control the use of AI – especially where AI supports critical or important functions.
The strategic framework remains important because AI has a profound impact on business processes and can potentially have a significant effect on the stability, availability and integrity of data and systems. Without a clear vision, there is a risk of uncoordinated introduction of individual applications that promise short-term efficiency gains but create new dependencies and ICT risks in the long term.
BaFin explicitly links strategic responsibility to the management body. Not only does it bear formal ultimate responsibility, but it must also have sufficient knowledge to be able to assess AI-related risks. AI governance is not understood here as a delegable task, but as a management duty that requires specialist knowledge. This makes it clear that AI strategy is not a document for the filing cabinet, but a central management tool for the company’s management.
Pillar 2: Organisation – AI governance as part of the existing control framework
The second pillar of AI governance concerns organisational implementation. BaFin emphasises that governance and organisation are key building blocks for achieving digital operational resilience. For AI, this primarily means clear responsibilities within existing control structures.
Instead of creating new, parallel AI organisations, the supervisory authority expects AI governance to be embedded in established governance models, in particular the three lines of defence model. As the first line of defence, specialist departments and IT are responsible for the development, operation and use of AI systems. The second line of defence, such as risk management, compliance and information security, defines framework conditions, checks compliance and assesses risks. Finally, internal audit is responsible for independently reviewing the effectiveness of the entire AI governance framework.
This structure is crucial from a supervisory perspective because it ensures that AI is not operated ‘between the lines’. Particularly in the case of highly critical applications, it must be clear at all times who is responsible for decisions, controls and escalations. The guidance makes it clear that this clarity is not optional, but a prerequisite for the proper handling of ICT risks arising from AI.
Pillar 3: The AI lifecycle – why one-off approvals are not enough
A particularly striking point in the BaFin guidance is its consistent focus on the AI life cycle. The supervisory authority makes it clear that ICT risks do not primarily arise along the value chain, but rather through the permanent integration of an AI system into the ICT landscape. This is precisely why it is not enough to test or approve AI systems on a one-off basis.
Instead, institutions must consider the entire life cycle, from development and testing to ongoing operation and decommissioning. Different risk profiles arise in each phase. For example, training data can be manipulated, models can drift, cloud dependencies can intensify, and security gaps may only become apparent during operation. BaFin therefore expects AI systems to be continuously monitored, regularly reviewed and, if necessary, adapted or decommissioned.
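How such continuous monitoring might be operationalised can be illustrated with a minimal sketch: the population stability index (PSI) is one common way to quantify drift between the score distribution seen at training time and the one observed in production. The metric choice, the synthetic data and the 0.2 escalation threshold below are illustrative assumptions, not requirements taken from the BaFin guidance.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (training-time) and a current (production) distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Keep out-of-range production values inside the outer bins
    actual = np.clip(actual, edges[0], edges[-1])
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    # Small floor avoids division by zero and log(0) for empty bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical check: scores from model validation vs. scores observed in production
rng = np.random.default_rng(42)
training_scores = rng.normal(0.50, 0.10, 10_000)
production_scores = rng.normal(0.55, 0.12, 10_000)  # slight shift simulates drift

psi = population_stability_index(training_scores, production_scores)
DRIFT_THRESHOLD = 0.2  # illustrative; each institution would calibrate its own tolerance
if psi > DRIFT_THRESHOLD:
    print(f"PSI={psi:.3f}: drift detected, escalate for model review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```

In practice, checks of this kind would not stand alone but feed into the institution’s existing ICT monitoring, reporting and escalation processes.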
The supervisory authority attaches particular importance to operation and decommissioning. From BaFin’s perspective, uncontrolled uninstallation, outdated model versions or insufficiently secured training data pose significant ICT risks. Governance therefore does not end with go-live, but explicitly includes exit strategies, uninstallation and the handling of historical data and models.
EU AI Act and DORA: two regimes, one governance approach
The guidance implicitly clarifies how the EU AI Act and DORA relate to each other. While the EU AI Act primarily addresses purpose, risk classification, transparency and human oversight, DORA focuses on resilience, operational safety and ICT risks. For banks, this does not mean double regulation, but rather complementary requirements.
Effective AI governance must therefore bring both perspectives together: it must ensure that AI applications are legally permissible and ethically acceptable, while at the same time being technically stable, secure and controllable. The three-pillar model offers a suitable framework for this because it interlinks strategic, organisational and operational aspects, thus doing justice to both regimes.
Conclusion: BaFin makes AI governance a management task
With its new guidance, BaFin has made it unmistakably clear that the use of AI in financial companies is no longer a grey area. AI is part of ICT risk management, is subject to DORA and must be managed accordingly. For banks, this means a clear mandate: AI governance is not an innovation project, but a permanent operating state that must be strategically managed, organisationally anchored and operationally implemented.
Institutions that take this requirement seriously gain more than regulatory certainty. They create transparency, increase their digital resilience and lay the foundation for the responsible and scalable use of AI.



