Bottom-up approach – Innovative AI use cases from specialist departments
Bottom-up approach instead of blockades: with this approach, employees independently identify AI use cases and drive the development of promising AI solutions. This increases motivation and commitment, promotes innovative ideas and creates greater acceptance of AI within the company. From proof of concept (PoC) to minimum viable product (MVP): practical, fast and regulation-compliant AI use cases for day-to-day business.
Why AI use cases drive acceptance
The bottom-up approach to AI governance focuses on business value as the central driver. In contrast to the top-down approach, in which governance structures and specifications are set from above, innovation and governance are developed together in an evolutionary and practical manner.
This approach relies on harnessing the creativity and expertise of the specialist departments: employees independently identify promising use cases and drive the development of AI solutions. This increases motivation and commitment, promotes innovative ideas and creates greater acceptance of AI within the company.
In practice, the bottom-up approach begins with the identification and prioritization of specific use cases – often based on business opportunities, efficiency gains or new service offerings. The close involvement of the operational teams ensures that the solutions developed are actually tailored to real needs in day-to-day business.
Idea generation is typically followed by the proof-of-concept (PoC) phase, in which technical feasibility and benefits are tested on a small scale. If the PoC leads to promising results, a minimum viable product (MVP) is developed, which is used under real conditions and iteratively improved.
A central aspect of the bottom-up model is the strategic decision between “make or buy”: companies must evaluate whether they want to develop the required AI skills and solutions themselves or purchase external products and services. The decision is often based on factors such as internal resources, time expenditure, protection of in-house expertise and regulatory requirements.
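How such a make-or-buy evaluation could be made transparent is sketched below as a simple weighted scoring exercise in Python; the criteria, weights and ratings are purely illustrative assumptions, not a prescribed method.

# Illustrative make-or-buy scoring sketch; criteria, weights and ratings are assumptions.
CRITERIA_WEIGHTS = {
    "internal_resources": 0.30,
    "time_to_value": 0.25,
    "protection_of_expertise": 0.25,
    "regulatory_requirements": 0.20,
}

def weighted_score(ratings, weights):
    """Weighted average of 1-5 ratings per criterion."""
    total = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total

# Hypothetical ratings (1 = weak, 5 = strong) for each option.
make_ratings = {"internal_resources": 2, "time_to_value": 2,
                "protection_of_expertise": 5, "regulatory_requirements": 4}
buy_ratings = {"internal_resources": 4, "time_to_value": 5,
               "protection_of_expertise": 2, "regulatory_requirements": 3}

make = weighted_score(make_ratings, CRITERIA_WEIGHTS)
buy = weighted_score(buy_ratings, CRITERIA_WEIGHTS)
print(f"make: {make:.2f}  buy: {buy:.2f}  ->  {'make' if make >= buy else 'buy'}")

Scores like these do not replace the strategic discussion, but they make the decision criteria explicit and comparable across use cases.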
Minimum standards, maximum progress
This use case-centered life cycle creates a governance model that evolves flexibly and in line with requirements. At the same time, the specialist departments remain close to productive operations, recognize risks or the need for adaptation at an early stage and can react pragmatically to new requirements.
The bottom-up approach is therefore particularly valuable in innovation-driven companies and in fields with rapid market changes, as it combines speed, creativity and practical relevance.
In the bottom-up approach to AI governance, control does not arise from abstract specifications, but develops organically from practical experience. Every successful use case provides valuable insights that are gradually transferred into reusable structures: standardized use case profiles or model cards emerge from individual project documentation. Ad hoc methods first become internal guidelines and then, where the need recurs, binding policies. This incremental approach resembles an evolutionary construction kit in which standards are not comprehensively defined in advance, but are established where operational experience and real challenges prove their necessity.
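What such a reusable use case profile or model card might capture can be sketched as a small data structure; the fields and example values below are illustrative assumptions rather than a fixed template.

from dataclasses import dataclass, field

@dataclass
class UseCaseProfile:
    """Minimal, illustrative profile distilled from individual project documentation."""
    name: str
    owner_department: str
    business_goal: str
    model_type: str                       # e.g. "LLM-based chatbot" or "credit scoring model"
    data_sources: list[str] = field(default_factory=list)
    risk_class: str = "unclassified"      # e.g. mapped to internal or EU AI Act risk categories
    lifecycle_stage: str = "PoC"          # PoC -> MVP -> production
    known_limitations: list[str] = field(default_factory=list)

profile = UseCaseProfile(
    name="Customer service chatbot",
    owner_department="Retail banking",
    business_goal="Shorter response times for standard enquiries",
    model_type="LLM-based chatbot",
    data_sources=["FAQ corpus", "anonymised chat transcripts"],
    risk_class="limited risk",
    lifecycle_stage="MVP",
    known_limitations=["no advice on regulated products"],
)
print(profile.name, "-", profile.lifecycle_stage)

Profiles of this kind can later be consolidated into guidelines and policies without forcing individual projects into a rigid template from day one.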
The result is a governance model with a high level of acceptance, as it was not imposed from above, but developed based on specific needs.
The strategic advantage of this approach lies in the combination of agility and practical relevance. Governance is not perceived as an obstructive preliminary step, but as a learning companion to the innovation process. Companies can quickly deliver initial added value, as resources are deployed specifically where they are immediately needed – without the effort of creating structures in advance for every eventuality. The continuous involvement of the specialist departments ensures that feedback from real-life use flows directly into the resulting standards, keeping them practical and tailored to requirements.
At the same time, this approach promotes an innovation-friendly corporate culture: employees experience that AI projects do not get stuck in the concept phase, but deliver concrete results in everyday working life. In this way, the bottom-up approach combines speed, cost awareness and organizational learning to create a governance model that grows dynamically with the company’s requirements.
Managing risks without a patchwork quilt
A key risk factor in the implementation of bottom-up AI governance is the so-called “governance patchwork quilt”. Without uniform standards and clearly defined processes, isolated initiatives and different approaches in individual departments can lead to inconsistencies and weaknesses in overall control. This harbors hidden compliance risks, especially if legal requirements or regulatory specifications are not taken into account consistently and systematically.
In addition, scaling problems make implementation more difficult: while smaller institutions can often operate well with the bottom-up approach, larger organizations tend to encounter challenges in uniformly orchestrating and consolidating individual initiatives. Risk consolidation also becomes more complex, as risk exposures are distributed across different use cases and business areas and can easily be overlooked without central monitoring.
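How decentrally reported risks could be consolidated into a single view is sketched below; the field names and example entries are assumptions for illustration, not a reference implementation.

from collections import defaultdict

# Hypothetical risk entries reported by individual use case teams.
risk_entries = [
    {"use_case": "chatbot", "department": "retail", "risk": "hallucinated product information", "severity": 3},
    {"use_case": "credit scoring", "department": "lending", "risk": "bias in training data", "severity": 4},
    {"use_case": "chatbot", "department": "retail", "risk": "personal data in transcripts", "severity": 4},
]

def consolidate(entries):
    """Group decentrally reported risks by department for a central overview."""
    by_department = defaultdict(list)
    for entry in entries:
        by_department[entry["department"]].append(entry)
    return {
        dept: {
            "open_risks": len(items),
            "max_severity": max(item["severity"] for item in items),
        }
        for dept, items in by_department.items()
    }

print(consolidate(risk_entries))
# e.g. {'retail': {'open_risks': 2, 'max_severity': 4}, 'lending': {'open_risks': 1, 'max_severity': 4}}

Even a simple overview like this makes it visible where exposures accumulate across departments before they are overlooked.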
The bottom-up approach is therefore particularly suitable for small and medium-sized companies and for organizations with a dynamic and innovation-driven corporate culture.
However, this model quickly reaches its limits in large organizations with complex regulatory requirements and extensive structures. There is a risk of a patchwork that is difficult to control and can jeopardize legal certainty and the trust of the supervisory authorities.
Carefully matching the chosen governance model to company size, complexity and regulatory density is therefore essential. Although the bottom-up approach can provide valuable impetus for the implementation of AI projects – from proof of concepts and minimum viable products to productive use in an “AI factory” – it should be flanked by supplementary top-down structures in larger contexts.
This differentiated view takes into account the reality that smaller institutions can successfully scale AI innovations with bottom-up approaches, while larger organizations rely on stronger regulatory and organizational governance mechanisms to manage risk and ensure compliance. Awareness and training opportunities, such as targeted awareness sessions or training on AI data security, can help to minimize compliance risks and promote acceptance of governance structures in the long term.
Conclusion
Overall, bottom-up governance can be a powerful driver of innovation; however, its suitability and effectiveness depend largely on the size of the company, the complexity of the AI applications and the regulatory framework. A clear, common framework and supplementary control mechanisms remain essential to ensure both sustainable business success and regulatory compliance.