Financial services institutions should adopt AI within existing principles-based supervision and risk management frameworks, rather than waiting for entirely new AI-specific rulebooks. That’s the central theme emerging from the Bank of England’s published summary of three roundtables on responsible AI adoption in the financial services sector.
The discussions were held in February with representatives of challenger banks and other large banking and insurance entities based in, or doing business in, the UK. The development is notable because it shifts the conversation from the familiar call for new AI rules toward adapting existing laws and controls while clarifying supervisory expectations.
As financial services firms race to operationalize AI, they naturally want to deploy faster. With speed and scale, however, come increased risks and bottlenecks: a shortage of specialized AI talent and the growing difficulty of proving compliance for ever-changing models. Traditional validation and transparency approaches, which depend on understanding how AI inputs map to outputs, may not scale for generative and agentic systems. This is pushing risk management toward testing, monitoring, and outcome-based guardrails.
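To make that shift concrete, here is a minimal sketch of an outcome-based guardrail in Python. The rules and names are hypothetical, not drawn from the roundtable summary: rather than explaining the model’s internals, the check validates what the model actually produced before anything reaches a customer.

```python
import re

# Illustrative sketch (all rules hypothetical): an outcome-based guardrail
# that checks what a generative model produced, instead of trying to
# explain how it produced it.

PROHIBITED_PATTERNS = [
    re.compile(r"\bguaranteed returns?\b", re.IGNORECASE),  # misleading promise
    re.compile(r"\b\d{16}\b"),                              # possible card-number leak
]

def passes_guardrails(output: str, max_chars: int = 2000) -> bool:
    """Gate a model response on observable outcomes, not internals."""
    if len(output) > max_chars:
        return False
    return not any(p.search(output) for p in PROHIBITED_PATTERNS)

# A failing output is blocked or escalated rather than shown to the customer.
draft = "We offer guaranteed returns of 12% a year."
print(passes_guardrails(draft))  # False: trips the misleading-promise rule
```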
Procurement and contract negotiations with third-party AI providers, on which organizations increasingly rely, are another point of friction. The common fear is that vendors are not sufficiently fluent in the unique compliance needs of regulated sectors like financial services, especially when it comes to cross-border rollout and the regulatory fragmentation it introduces.
Currently, the predominant approach to managing AI risk is transparency: proving an understanding of every internal mechanism. This is shifting toward proving the ability to reliably control real-world behavior, a change driven by the nondeterministic nature of generative AI. The outcome-based approach involves extensive pre-deployment testing across key areas such as performance, robustness, and, especially in financial services, bias and fairness, followed by continuous monitoring in production. After all, AI models can drift as they ingest more data, which means outputs can change unexpectedly over time.
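As an illustration of what drift monitoring can look like, the sketch below computes the Population Stability Index (PSI), a metric long used in credit-model validation, to compare a model’s validation-time score distribution against its live one. The data and thresholds here are invented for the example; the common 0.1/0.25 rules of thumb are conventions, not regulatory values.

```python
import numpy as np

# Minimal drift-monitoring sketch using the Population Stability Index (PSI).
# A PSI near 0 means the live distribution matches the baseline; values above
# roughly 0.25 are conventionally taken as a trigger for re-validation.

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the distribution of live scores against a baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 10_000)  # scores at validation time
live = rng.normal(0.55, 0.12, 10_000)      # scores after months in production
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.25 would typically trigger review
```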
Roundtable participants also expect growing demand for AI vendor due diligence toolkits, which will be especially important to AI fintechs given their role as third parties to financial services institutions. Fintechs will be expected to package due diligence as part of their core offerings: contract clause libraries with AI-specific addenda, audit and access rights playbooks, and data-location mapping with sovereignty documentation. Such toolkits aren’t just extra paperwork; they are becoming vital accelerators that reduce deployment friction and compliance fears among regulated buyers like banks and insurers.
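A data-location map, for instance, could ship as a machine-readable artifact rather than a PDF. The sketch below is purely hypothetical, with invented field names and entities, but shows how a regulated buyer could check a vendor’s declared data flows against its approved jurisdictions automatically.

```python
from dataclasses import dataclass

# Hypothetical machine-readable data-location map, the kind of artifact a
# due diligence toolkit might include so a buyer can verify data residency
# per processing stage. All names and entities are invented.

@dataclass(frozen=True)
class DataFlow:
    stage: str          # e.g. "inference", "fine-tuning", "log retention"
    processor: str      # legal entity handling the data
    region: str         # ISO country code where processing occurs
    retention_days: int

VENDOR_DATA_MAP = [
    DataFlow("inference", "ExampleAI Ltd", "GB", 0),
    DataFlow("log retention", "ExampleAI Ltd", "IE", 30),
    DataFlow("fine-tuning", "ExampleAI Inc", "US", 365),
]

def flags_for(allowed_regions: set[str]) -> list[DataFlow]:
    """Surface any flow that leaves the buyer's approved jurisdictions."""
    return [f for f in VENDOR_DATA_MAP if f.region not in allowed_regions]

# A UK bank restricting data to GB/IE would flag the US fine-tuning flow.
print(flags_for({"GB", "IE"}))
```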
Especially when doing business across borders, fintechs can respond by shipping their products with modular compliance features, such as componentized risk management and regtech as a service. This allows them to adapt their offerings to specific jurisdictions without rebuilding them each time.
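One way to picture componentized compliance is as configuration rather than code. In the hypothetical sketch below, jurisdiction-specific rule sets are plain data, so the same product can enter a new market by selecting a rule set instead of rebuilding the stack; the rules shown are illustrative, not actual regulatory requirements.

```python
# Hypothetical jurisdiction rule sets: entering a new market means adding a
# configuration entry, not re-engineering the product. Rules are invented.

JURISDICTION_RULES = {
    "UK": {"explainability_report": True, "data_residency": {"GB", "IE"}},
    "EU": {"explainability_report": True, "data_residency": {"IE", "DE", "FR"}},
    "US": {"explainability_report": False, "data_residency": {"US"}},
}

def build_compliance_stack(jurisdiction: str) -> dict:
    """Assemble the controls a deployment must enable for one market."""
    try:
        return JURISDICTION_RULES[jurisdiction]
    except KeyError:
        raise ValueError(f"No rule set defined for {jurisdiction!r}") from None

print(build_compliance_stack("UK"))
```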