Navigating Divergent AI Regulation – Can Standards Bring Clarity?

Artificial intelligence is transforming financial services, from automating credit assessments to streamlining compliance processes. But while AI capabilities are developing at pace, regulatory frameworks are struggling to keep up. Nowhere is this more apparent than in the contrasting approaches taken by the European Union and the United Kingdom. The EU has opted for a rules-heavy, product safety model under the recently published EU AI Act, while the UK is championing a principles-based, sector-led approach.
Yet despite this regulatory divergence, the professionals at the heart of financial services – compliance experts, technologists, and risk specialists – are actively developing international technical standards and shared governance frameworks, laying the groundwork for operational harmonization that transcends legal jurisdictions.
This article draws on insights from the FS Club webinar on AI regulation, sponsored by AIQI and featuring Adam Leon Smith (Chair, AIQI Consortium) and David Doyle (Board Member, Kangaroo Group (EU Parliament) and EU Policy Director, The Genesis Initiative), hosted by Mike Wardle (Chief Executive Officer, Z/Yen Group). It explores the divergent EU and UK approaches to AI regulation and outlines how global standards can encourage harmonization.
The EU AI Act: Risk-Based, Rule-Driven, and Extra-Territorial
The EU’s AI Act (the Act) is widely considered the most ambitious and comprehensive AI regulation to date. It introduces a risk-based classification system that determines regulatory obligations based on the intended use and potential societal impact of an AI system.
While the Act does not yet include financial services-specific rules, use cases such as credit risk modelling, robo-advisory tools, and algorithmic trading all fall within its “high-risk” scope. The European Commission has launched consultations to explore this further, and additional guidance from the European Supervisory Authorities can be expected over the coming months.
Under the EU AI Act, providers of high-risk AI systems – including both original model creators and third-party firms that integrate those models into their products – are required to perform pre-market self-certification before deployment. This involves conducting a comprehensive conformity assessment to ensure the system meets the Act’s stringent requirements on risk management, data governance, transparency, robustness, and human oversight.
Importantly, liability extends beyond the original developer: third-party firms that customize or apply general-purpose AI in ways that elevate risk are also subject to the same obligations. Whether an AI system is developed in-house or embedded from an external provider, the entity placing it on the EU market must demonstrate compliance before the system goes live, ensuring that high-risk AI is evaluated not just for its technical capabilities, but also for its intended use and potential impact.
The Act also has broad extra-territorial reach. Any company offering AI systems in the EU, regardless of its country of origin, must comply with the Act and appoint an authorised representative within the EU. This ensures that non-EU providers cannot bypass compliance simply by being based abroad.
Another defining feature of the EU regime is its reliance on harmonised technical standards. Under the EU framework, providers who comply with officially recognized harmonised standards gain a “presumption of conformity” – a legal safeguard that their systems meet the AI Act’s requirements. This creates a strong incentive for firms to adopt structured governance and risk management systems, including standards such as ISO/IEC 42001 for AI management and others addressing transparency, explainability, and data bias.
The UK Approach: Principles, Proportionality, and Sector Focus
In contrast, the UK has taken a flexible and innovation-friendly approach to AI regulation. Rather than introducing a standalone law, the government has issued a set of five cross-sector principles to guide regulators: safety, transparency, fairness, accountability, and contestability. These principles are to be interpreted and enforced by existing regulators, such as the Financial Conduct Authority, Bank of England, and Information Commissioner’s Office, within their respective mandates.
The emphasis is on proportionality and outcome-based regulation. AI systems are not judged solely on their technical design but on their real-world impact – particularly on consumers and markets. This allows regulators to tailor their oversight to sector-specific risks without stifling innovation through rigid requirements.
A distinguishing feature of the UK approach is its emphasis on transparency and explainability. Unlike the EU’s focus on general system transparency, UK regulators are pushing for “local explainability” – the ability to explain individual AI decisions. This aligns closely with financial services obligations under the Consumer Duty and data protection laws, especially in contexts such as lending, claims processing, and trading decisions.
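To make local explainability concrete, here is a minimal sketch using a linear credit-scoring model, where each feature’s contribution to an individual decision can be read directly from the model’s coefficients. The feature names, training data, and applicant values are illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch: a per-decision ("local") explanation for a linear
# credit-scoring model. Features, data, and the applicant below are
# illustrative assumptions, not regulatory requirements.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income_k", "debt_ratio", "years_employed", "missed_payments"]

# Hypothetical past decisions: y = 1 means the loan was repaid.
X_train = np.array([
    [55.0, 0.20, 8, 0],
    [32.0, 0.55, 2, 3],
    [78.0, 0.10, 12, 0],
    [41.0, 0.45, 1, 2],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

def explain_decision(applicant: np.ndarray) -> list[tuple[str, float]]:
    """Rank each feature's contribution to this one decision.

    For a linear model, coefficient * feature value is a faithful local
    attribution: the contributions (plus the intercept) sum to the
    model's log-odds for this individual applicant.
    """
    contributions = model.coef_[0] * applicant
    return sorted(zip(FEATURES, contributions),
                  key=lambda kv: abs(kv[1]), reverse=True)

applicant = np.array([38.0, 0.50, 3, 1])
print(f"Approval probability: {model.predict_proba([applicant])[0, 1]:.2f}")
for name, contribution in explain_decision(applicant):
    print(f"  {name}: {contribution:+.3f}")
```

For non-linear models, firms would typically reach for model-agnostic attribution techniques such as SHAP or LIME, but the regulatory expectation is the same: being able to account for this decision, not just the model in aggregate.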
The UK also benefits from mature governance frameworks already in place within the financial sector. Regulatory constructs such as the Senior Managers & Certification Regime, model risk management programs, and operational resilience rules are being adapted to encompass AI governance without needing to reinvent the wheel.
While the UK does not impose a central pre-market conformity assessment for AI, firms are still expected to demonstrate that they have implemented appropriate controls, governance mechanisms, and oversight. Increasing reliance on third-party and foundation model providers is also prompting regulators to scrutinize outsourcing arrangements and third-party risk management more closely.
Harmonization via Global Technical Standards
Despite divergent legal frameworks, a growing body of international technical standards is acting as a unifying force. Organizations such as the ISO/IEC and regional standards bodies are working alongside industry consortia to establish a common language for responsible AI development. These standards include:
- ISO/IEC 42001 – AI Governance: A comprehensive international standard that provides a structured framework for the governance and management of AI systems. It is analogous to ISO/IEC 27001 (for information security) and ISO 9001 (for quality management), and covers policies, responsibilities, risk assessment, and continual improvement for organizations deploying AI.
- ISO/IEC 6254 – Explainability of AI: This standard focuses on interpretability and explainability of AI systems, detailing approaches and methodologies for making complex AI decisions understandable to humans. It addresses both general system explainability and case-specific decision explanation.
- ISO/IEC 5259 Series – Data Quality and Management for AI: A series of standards that establish best practices for managing data within AI systems, covering data collection, processing, representativeness, and bias mitigation. It is highly relevant for ensuring fairness, especially in regulated sectors like financial services (a minimal disparity check is sketched after this list).
- ISO/IEC 27001 – Information Security Management: Although not AI-specific, this standard provides a risk-based approach to information security management, is foundational in regulated environments, and serves as the structural model for ISO/IEC 42001.
- UK AI Cybersecurity Code of Practice (in development as an ETSI standard): Originally developed by the UK’s National Cyber Security Centre (NCSC), this code outlines best practices for the secure design, deployment, and maintenance of AI systems. It is being adopted and formalized as a consensus-based standard under ETSI (European Telecommunications Standards Institute).
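The data-quality and bias theme in the ISO/IEC 5259 series lends itself to a small illustration. The sketch below computes a demographic parity difference – one widely used disparity indicator – over a hypothetical set of lending decisions. The group labels, data, and the 0.1 review threshold are assumptions for illustration, not values drawn from the standard itself.

```python
# Minimal sketch: an outcome-disparity check on a hypothetical lending
# dataset. The 0.1 threshold and group labels are illustrative
# assumptions, not values taken from the ISO/IEC 5259 series.
from collections import Counter

# (group, approved) pairs for past decisions -- hypothetical data.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = Counter(group for group, _ in decisions)
approvals = Counter(group for group, approved in decisions if approved)
rates = {group: approvals[group] / counts[group] for group in counts}
print("Approval rates by group:", rates)

# Demographic parity difference: the gap between the highest and lowest
# group-level approval rates. A common (but context-dependent) screen
# flags gaps above ~0.1 for human review.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:
    print("Flag for review: approval rates diverge across groups.")
```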
The EU AI Act’s presumption of conformity makes these standards especially valuable, but their benefits extend beyond Europe. UK regulators, while not mandating specific standards, view adherence to internationally recognized frameworks as evidence of best practice. For financial institutions operating across borders, aligning with such standards offers a pragmatic way to meet overlapping obligations while reducing duplication.
Standards also provide the scaffolding for building internal AI assurance capabilities. By adopting frameworks that include documentation, testing protocols, bias detection methods, and human oversight structures, firms can operationalize AI compliance in a way that is auditable, repeatable, and defensible.
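As one illustration of what “auditable, repeatable, and defensible” can look like in code, the sketch below records a per-decision audit entry capturing the model version, inputs, output, local explanation, and the human reviewer of record. The schema is a hypothetical starting point, not a format prescribed by ISO/IEC 42001 or any regulator.

```python
# Minimal sketch of a per-decision audit record supporting auditability
# and human oversight. The schema is a hypothetical illustration, not a
# format prescribed by ISO/IEC 42001 or any regulator.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    model_id: str                # which system produced the decision
    model_version: str           # exact version, for reproducibility
    inputs: dict                 # features as seen by the model
    output: str                  # the decision itself
    explanation: dict            # local feature attributions or rationale
    human_reviewer: str | None   # who reviewed or overrode, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    model_id="credit_scoring",
    model_version="2.3.1",
    inputs={"income_k": 38.0, "debt_ratio": 0.5},
    output="declined",
    explanation={"debt_ratio": -0.42, "income_k": 0.18},
    human_reviewer="analyst_417",
)

# Persist as an append-only log line so every automated decision
# remains reviewable and contestable after the fact.
print(json.dumps(asdict(record)))
```

Append-only records of this kind are what turn governance policy into evidence: they let a firm reconstruct any individual decision long after the model that made it has been retrained or retired.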
Practical Implications for Financial Services Firms
While the regulatory divergence between the EU and UK on AI rules presents practical challenges, firms have an opportunity to streamline and strengthen AI-enabled compliance operations by adopting a holistic approach:
- Classify and inventory AI use cases: Understand which systems fall under “high-risk” categories in the EU and which principles apply under UK regulation. Common areas of focus include credit scoring, fraud detection, algorithmic trading, and chatbots (a minimal inventory sketch follows this list).
- Adopt structured AI governance: Implement an enterprise-wide framework such as ISO/IEC 42001 to manage AI risks, assign accountability, and ensure consistency across departments and jurisdictions.
- Enhance transparency and explainability: Invest in methods and tooling to explain AI outputs, particularly in customer-facing or high-impact contexts. This is essential under both EU and UK expectations.
- Embed human oversight and contestability: Ensure there is meaningful human control over automated decisions, especially where customer rights or financial outcomes are involved. Design processes for appeals, overrides, and redress.
- Strengthen third-party risk management: Evaluate AI vendors and service providers for compliance readiness. Ensure contracts reflect data governance, audit rights, and obligations to assist in regulatory inquiries.
- Prepare for EU registration and conformity assessments: For high-risk AI systems entering the EU market, ensure the necessary compliance infrastructure is in place – including legal representation, documentation, and testing.
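To make the classification step above concrete, here is a minimal sketch of an AI use-case inventory that records a working EU AI Act risk tier and the UK principles each system engages most, alongside a named owner. The tiers and mappings shown are illustrative judgments, not legal determinations.

```python
# Minimal sketch of an AI use-case inventory mapping each system to a
# working EU AI Act risk tier and the UK principles it engages most.
# The classifications shown are illustrative judgments, not legal advice.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIUseCase:
    name: str
    eu_risk_tier: str        # e.g. "high", "limited", "minimal"
    uk_principles: tuple     # which of the five UK principles apply most
    owner: str               # accountable senior manager (SM&CR-style)

INVENTORY = [
    AIUseCase("credit_scoring", "high",
              ("fairness", "contestability", "transparency"),
              "head_of_credit"),
    AIUseCase("fraud_detection", "high",
              ("safety", "accountability"),
              "head_of_financial_crime"),
    AIUseCase("customer_chatbot", "limited",
              ("transparency",),
              "head_of_customer_service"),
]

# High-risk EU systems drive conformity-assessment planning; every
# entry keeps a named owner so accountability is never ambiguous.
for uc in INVENTORY:
    if uc.eu_risk_tier == "high":
        print(f"{uc.name}: EU conformity assessment required; owner={uc.owner}")
```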
By aligning compliance strategies to international standards and focusing on core governance principles, firms can reduce complexity while strengthening trust and resilience in their AI systems.
Enforcement Outlook: Building Capacity and Learning in Real Time
While the EU AI Act includes clear enforcement mechanisms – such as substantial fines for non-compliance – it will take time for a robust supervisory infrastructure to mature. Member States must designate national market surveillance authorities, and the new EU AI Office is still ramping up operations. In the early years, enforcement may focus on education, guidance, and flagrant violations.
In the UK, enforcement will be decentralized and driven by existing regulators under current laws. AI-related failures could trigger actions under data protection, consumer protection, or financial conduct rules, even in the absence of AI-specific statutes. This creates a landscape where oversight is real, even if not always branded as “AI regulation.”
Firms should expect increasing supervisory interest in AI over the next two years, particularly in high-impact areas. Voluntary transparency, proactive engagement with regulators, and participation in sandbox initiatives will help firms demonstrate good faith and preparedness.
Alignment of Underlying Goals
While there are clear political and legal differences in AI regulation between the EU and UK, the underlying goals are aligned: to ensure AI is safe, fair, transparent, and accountable. For compliance professionals, the most effective path forward is not to build siloed programs for each regime, but to invest in harmonized, standards-driven governance that meets the expectations of both.
By embedding international standards, fostering cross-functional collaboration, and focusing on explainable, well-documented, and auditable AI systems, firms can future-proof their compliance posture. In a rapidly evolving regulatory landscape, strategic alignment will be the hallmark of resilient and responsible AI adoption.