ICC AI Governance Policy Paper 2025: Global Standards for Artificial Intelligence Regulation

📌 Key Takeaways

  • Fragmentation Risk: Divergent national AI regulations threaten global interoperability, raise compliance costs, and disproportionately burden SMEs across all jurisdictions
  • Standards as Bridges: International standards like ISO/IEC 42001 and ISO/IEC 23894 can provide common compliance pathways across different regulatory regimes
  • Eight Recommendations: The ICC proposes strategic alignment, local participation, industry-driven standards, multistakeholder collaboration, and procurement integration
  • EU-Global Tension: CEN-CENELEC standards developed for the EU AI Act risk diverging from existing ISO/IEC frameworks, creating dual compliance burdens
  • SME Imperative: International standards lower barriers to entry by providing scalable governance frameworks that replace costly bespoke compliance systems

The Regulatory Fragmentation Crisis in Global AI Governance

The International Chamber of Commerce, representing more than 45 million companies across over 170 countries, published its definitive policy paper on AI governance and standards in July 2025 — a document that arrives at a critical inflection point for global artificial intelligence regulation. As nations and regional blocs independently construct their own AI laws, policies, and technical requirements, the ICC warns that a dangerous pattern of regulatory fragmentation is emerging that could fundamentally undermine the technology’s potential to drive economic growth and social progress.

The central thesis of the ICC’s analysis is straightforward but urgent: duplicative and potentially conflicting standards and compliance schemes raise the costs of doing business in an increasingly globalised economy. When the European Union mandates one set of technical requirements through the EU AI Act, while the United States follows a different approach through NIST frameworks and executive orders, and dozens of other jurisdictions develop their own unique requirements, the cumulative effect creates a compliance labyrinth that hinders AI deployment across borders.

This fragmentation is not merely a theoretical concern. Companies developing AI systems that operate across multiple jurisdictions already face the prospect of meeting overlapping but slightly different technical requirements from different regulatory regimes. The challenge is particularly acute for AI supply chains, where systems are frequently built from components produced by different actors in different jurisdictions. Each node in these global supply chains may be subject to distinct governance requirements, creating compounding compliance complexity that threatens to slow the very innovation that AI regulation purports to enable.

For organizations seeking to understand the full scope of this regulatory landscape, the ICC’s paper provides an essential analytical framework. The document identifies specific mechanisms through which fragmentation occurs — from divergent terminology and risk classification methodologies to conflicting standards on data sharing and integrity — and proposes concrete solutions grounded in the international standards ecosystem. For those looking to engage with the complete analysis, Libertify’s interactive experience presents the dense policy content in an accessible format.

Standards as a Bridge Between Divergent Legal Regimes

Perhaps the most important contribution of the ICC policy paper is its articulation of how international, market-driven standards can serve as bridging mechanisms between different legal and regulatory approaches. Even where legal frameworks differ fundamentally between countries and regions — as they increasingly do in the AI domain — standards can provide a common technical language and consistent implementation pathways.

The ICC draws a critical distinction that often gets lost in policy debates: standards explain how to meet regulatory requirements and facilitate their implementation, but they cannot extend regulation beyond what legislators have enacted. Standards are not substitutes for the role of governments in setting policy goals and legal requirements. Rather, they translate those high-level objectives into practical, implementable technical specifications that businesses can adopt.

This bridging function works because standards development organisations have mature governance systems with process requirements that elicit contributions from a broad set of stakeholders, establish consensus among participants, and produce high-quality results reflecting the best available technical solutions. When a standard like ISO/IEC 42001 is developed through such processes, it carries legitimacy and practical applicability that government-unique technical requirements often lack.

A key advantage that the ICC highlights is the maintenance and evolution cycle inherent in standards bodies’ operations. Standards organisations regularly determine whether each standard should be revised, confirmed, or withdrawn — a built-in mechanism for keeping pace with technological change. This iterative approach stands in contrast to legislative requirements, which typically require lengthy amendment processes to update, and can provide more responsive governance for a technology evolving as rapidly as artificial intelligence.

Core International AI Standards: ISO/IEC 42001 and Beyond

The ICC paper identifies several foundational international standards that form the backbone of responsible AI governance. Understanding these standards is essential for any organization deploying AI systems in regulated environments or seeking to demonstrate best practices to clients, regulators, and the public.

ISO/IEC 42001:2023 establishes the requirements for an AI management system. Analogous to ISO/IEC 27001 for information security management, this standard provides a systematic framework for organizations to manage AI-related risks and opportunities. It covers organizational context, leadership commitment, planning, support resources, operational procedures, performance evaluation, and continuous improvement — a comprehensive lifecycle approach to AI governance.

ISO/IEC 23894:2023 provides guidance for AI risk management, building on the widely adopted ISO 31000:2018 risk management framework. This standard helps organizations identify, assess, and mitigate risks specific to AI systems, including risks related to bias, transparency, robustness, and security. Its alignment with ISO 31000 means organizations already practicing general risk management can extend their frameworks to cover AI-specific concerns without starting from scratch.
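To make the identify–assess–treat cycle concrete, here is a minimal sketch of an AI risk register in the style such a framework implies. Everything here is illustrative: the category names, the likelihood × impact scoring, and the escalation threshold are hypothetical conventions, not content drawn from ISO/IEC 23894 or ISO 31000.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a minimal AI risk register following a generic
# identify -> assess -> treat cycle. Categories and scoring are illustrative,
# not taken from ISO/IEC 23894 or ISO 31000.

@dataclass
class AIRisk:
    name: str
    category: str          # e.g. "bias", "transparency", "robustness", "security"
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    mitigation: str = "unassigned"

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common (illustrative) heuristic
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def requiring_treatment(self, threshold: int = 12) -> list[AIRisk]:
        # Risks at or above the threshold are escalated for treatment
        return sorted(
            (r for r in self.risks if r.score >= threshold),
            key=lambda r: r.score,
            reverse=True,
        )

register = RiskRegister()
register.add(AIRisk("Training-data bias", "bias", likelihood=4, impact=4))
register.add(AIRisk("Model inversion attack", "security", likelihood=2, impact=5))
register.add(AIRisk("Opaque decision rationale", "transparency", likelihood=3, impact=3))

for risk in register.requiring_treatment():
    print(f"{risk.name}: score {risk.score}")
```

The point of the sketch is the extension argument in the paragraph above: an organization that already maintains a general-purpose risk register only needs AI-specific categories and criteria, not a new governance system.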

ISO/IEC 42005:2025 addresses AI system impact assessment, providing structured methodologies for evaluating the potential effects of AI deployment on individuals, organizations, and society. This standard is particularly relevant for high-risk AI applications in areas such as healthcare, justice, education, and financial services, where impact assessment is both an ethical imperative and, increasingly, a regulatory requirement.

The paper also emphasizes that AI systems are fundamentally IT systems that must be secured using established information security practices. Standards including ISO/IEC 27001:2022 for information security management and ISO/IEC 27701 for privacy practices apply directly to AI deployments, while emerging standards like ISO/IEC 27090 address AI-specific security threats. This layered approach — general IT security plus AI-specific governance — provides comprehensive protection without duplicating existing frameworks.

Transform dense policy papers into interactive experiences your compliance team will actually engage with

Try It Free →

The EU AI Act and Regional Standards Challenges

The European Union’s AI Act, enacted in 2024 with general-purpose AI model provisions coming into effect in August 2025, represents the world’s most comprehensive legislative framework for artificial intelligence regulation. The ICC policy paper examines this landmark legislation through the lens of standards alignment, identifying both positive contributions and concerning fragmentation risks.

On the positive side, the EU AI Act mandates risk management systems and quality assurance frameworks for high-risk AI applications — requirements that align conceptually with international standards like ISO/IEC 42001 and ISO/IEC 23894. The legislation creates clear regulatory expectations that standards can help organizations meet systematically and efficiently.

However, the paper raises significant concerns about the European Commission’s approach to standards development under the AI Act. In May 2023, the Commission tasked CEN-CENELEC — the European standardisation bodies — with developing harmonised standards specifically for the EU AI Act’s high-risk provisions. While harmonised standards provide a “presumption of conformity” under EU law (meaning compliance with the standard is sufficient to demonstrate regulatory compliance), they may diverge from existing ISO/IEC standards that address the same technical domains.

This parallel development creates a concrete fragmentation risk: organizations may need to comply with ISO/IEC standards for global operations while simultaneously meeting CEN-CENELEC harmonised standards for EU market access. The ICC’s position is unambiguous — regional standards should remain fully compatible with, and wherever possible identical to, existing international standards to prevent market fragmentation. Any divergence between European and international standards creates dual compliance burdens that particularly disadvantage smaller companies lacking the resources to navigate multiple frameworks simultaneously.

The EU’s Code of Practice for general-purpose AI models — an interim bridge applicable from August 2025, before formal harmonised standards are completed — adds another layer to this complex landscape. While pragmatic, such interim measures risk entrenching practices even where formal standards subsequently take a different approach.

NIST AI Risk Management Framework and US Approaches

The United States has taken a distinctly different approach to AI governance, centering its framework around the National Institute of Standards and Technology (NIST) rather than comprehensive legislation. The ICC paper highlights the NIST AI Risk Management Framework (AI RMF), published in version 1.0 in January 2023, as a significant contribution to global AI governance.

The NIST AI RMF offers several advantages that the ICC views favorably. It is publicly available at no cost, removing financial barriers to adoption. It is designed to be voluntary and technology-neutral, avoiding the prescriptive mandates that can quickly become outdated as AI capabilities evolve. And importantly, NIST has published crosswalks between the AI RMF and ISO/IEC 42001, demonstrating practical pathways for organizations to achieve compliance with both frameworks simultaneously.
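One way to see why crosswalks matter in practice is to treat them as data: a mapping that lets a single piece of compliance evidence count toward both frameworks at once. The sketch below uses the real NIST AI RMF function names (GOVERN, MAP, MEASURE, MANAGE) and ISO/IEC 42001’s management-system clause areas, but the specific pairings are hypothetical examples for illustration — they are not NIST’s published crosswalk.

```python
# Illustrative toy crosswalk: tracking which ISO/IEC 42001 clause areas are
# covered once evidence exists for a given NIST AI RMF function.
# The pairings below are hypothetical, NOT NIST's published crosswalk.

CROSSWALK: dict[str, list[str]] = {
    "GOVERN":  ["Leadership", "Planning"],
    "MAP":     ["Context of the organization", "Planning"],
    "MEASURE": ["Performance evaluation"],
    "MANAGE":  ["Operation", "Improvement"],
}

def clauses_covered(evidence: dict[str, bool]) -> set[str]:
    """Given which NIST functions have evidence, list ISO clause areas also covered."""
    covered: set[str] = set()
    for function, satisfied in evidence.items():
        if satisfied:
            covered.update(CROSSWALK.get(function, []))
    return covered

evidence = {"GOVERN": True, "MAP": True, "MEASURE": False, "MANAGE": False}
print(sorted(clauses_covered(evidence)))
```

The design point is the one the ICC makes: a maintained crosswalk turns dual compliance from two parallel programmes into one evidence base viewed through two lenses.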

However, the ICC notes that NIST documents can also contribute to the proliferation problem when they arise from Executive Orders and specific US laws. Each new presidential directive or legislative mandate may generate additional NIST publications that overlap with existing international standards, creating confusion about which frameworks should take precedence and how they relate to one another.

The contrast between the EU’s legislative approach and the US’s standards-centric approach illustrates a fundamental tension in global AI governance. Organizations operating across the Atlantic must navigate both paradigms — understanding which EU AI Act requirements apply to their products and services while also demonstrating alignment with NIST frameworks expected by US clients and regulators. International standards serve as the natural common ground between these approaches, providing technical specifications that satisfy both regulatory philosophies.

Impact on Small and Medium-Sized Enterprises

Throughout its analysis, the ICC returns repeatedly to the disproportionate impact that regulatory fragmentation has on small and medium-sized enterprises. This emphasis reflects the ICC’s membership base but also addresses a genuine structural concern: SMEs are the backbone of innovation in many AI application domains, yet they are the least equipped to navigate complex, overlapping compliance requirements.

International standards offer SMEs a particularly valuable pathway. Rather than investing in bespoke governance systems designed from scratch — an expense that can be prohibitive for smaller companies — SMEs can adopt established standards like ISO/IEC 42001 as ready-made frameworks that scale to their operations. These standards provide a common language that facilitates contracting with larger enterprises, enables participation in public procurement processes, and demonstrates responsible AI practices to clients and regulators.

The paper also identifies an awareness gap: many SMEs are not involved in or even aware of the standards development process. This means standards may not fully account for SME operational realities, and smaller companies may miss early signals about evolving compliance expectations. The ICC’s recommendation for governments to invest in training programmes and workshops specifically targets this gap, arguing that building technical expertise around AI standards is essential for inclusive economic participation in the AI era.

Clear and accessible procurement rules are another critical factor for SMEs. When governments incorporate widely recognized international standards into public procurement requirements — rather than creating bespoke technical specifications — they lower barriers to entry and create level playing fields where SME innovators can compete with larger incumbents on the basis of capability rather than compliance resources.

Help your team understand complex AI governance frameworks — interactive documents drive 4× more engagement

Get Started →

ICC’s Eight Policy Recommendations for Governments

The policy paper culminates in eight specific recommendations directed at governments worldwide. These recommendations form a cohesive framework for achieving global AI governance that promotes innovation while managing risks effectively.

1. Promote Strategic Alignment in AI Standards Development. New standards should be developed only in response to identified market needs, should command strong business support, and must not conflict or overlap with widely used existing standards. This recommendation directly addresses the proliferation problem, in which new standards projects duplicate work that existing standards already cover.

2. Ensure Domestic Business and Expert Participation. Governments should actively raise awareness of opportunities to influence standards development and encourage local experts from all domestic sectors to participate. Industry expertise is described as crucial for creating practical, implementable standards that align with technological realities.

3. Prioritize Industry-Driven and Globally Recognized Standards. Industry-led, international standards foster interoperability, accelerate innovation, and ensure that standards remain practical, adaptable, and rooted in real-world applications. This is explicitly positioned as preferable to government-mandated regional requirements.

4. Champion Multistakeholder Collaboration. AI standards should be developed through transparent, inclusive processes involving industry leaders, academia, civil society, and policymakers. This ensures standards are robust, balanced, and reflective of diverse perspectives while maintaining technical rigor.

5. Leverage Existing Standards. Regulatory initiatives should reference published standards — particularly ISO/IEC 23894, ISO/IEC 42001, and ISO/IEC 42005 — rather than duplicating their content in legislation. Allowing conformance with a standard to demonstrate regulatory compliance reduces costs and promotes consistency.

6. Use Standards in Public Sector Procurement. Governments should incorporate widely supported AI standards into procurement requirements instead of creating government-unique technical specifications. Public sector procurement can drive broader industry adoption and level the playing field for SMEs.

7. Support Participation Through Funding and Incentives. Concrete support mechanisms — including funding, tax incentives, and training resources — should facilitate company participation in standardisation efforts, particularly for SMEs that may lack resources for engagement.

8. Enhance Awareness and Education. Investment in training programmes and workshops is essential to build the technical expertise needed for effective AI standards implementation across organizations of all sizes.

Supply Chain Implications and Cross-Border AI Trade

The ICC paper pays particular attention to AI governance challenges within global supply chains, an area where fragmentation risks are especially acute. Modern AI systems are rarely developed in isolation — they incorporate components, data, and services from multiple providers across multiple jurisdictions. Each element of this supply chain may be subject to different regulatory requirements, creating compounding compliance complexity.

The paper highlights the role of structured, semantic data standards in ensuring accurate, safe, and efficient supply chain operations involving AI. The ICC’s own Digital Standards Initiative and its Key Trade Documents and Data Elements modelling work, alongside UN/CEFACT’s trade facilitation standards, provide frameworks for standardizing the data that AI systems process in trade contexts. As generative AI and AI agents are increasingly deployed in trade documentation, customs processing, and logistics optimization, the quality and consistency of underlying data standards become critical.
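A small sketch makes the value of structured trade data tangible: when a document is a typed record with validation rather than free text, every AI system downstream can rely on the same structural guarantees. The field names below are hypothetical and not drawn from the ICC’s Key Trade Documents model; only the ISO 4217 currency-code convention is a real standard.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: a minimally structured trade-document record of the
# kind semantic data standards aim to make machine-readable. Field names are
# illustrative, not from the ICC Key Trade Documents and Data Elements model.

@dataclass(frozen=True)
class TradeDocument:
    document_id: str
    document_type: str      # e.g. "commercial_invoice", "bill_of_lading"
    issue_date: date
    currency: str           # ISO 4217 code, e.g. "EUR"
    total_amount: float

    def __post_init__(self) -> None:
        # Structural checks a downstream AI pipeline can rely on
        if len(self.currency) != 3 or not self.currency.isupper():
            raise ValueError(f"invalid currency code: {self.currency!r}")
        if self.total_amount < 0:
            raise ValueError("total_amount must be non-negative")

doc = TradeDocument("INV-2025-0042", "commercial_invoice",
                    date(2025, 7, 1), "EUR", 12500.0)
print(doc.document_type, doc.currency)
```

The validation-at-ingestion pattern is the practical payoff of shared data standards: errors surface at the document boundary, not deep inside an AI-driven customs or logistics workflow.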

Cross-border considerations also extend to the recognition and mutual acceptance of compliance certifications. When an organization certifies its AI management system against ISO/IEC 42001 in one jurisdiction, the value of that certification depends on its recognition in other markets. The ICC’s call for mutual recognition mechanisms and streamlined standards points toward a more coordinated global architecture where compliance achieved in one jurisdiction translates meaningfully to others.

For businesses engaged in international trade and compliance, the ICC paper provides essential strategic guidance. Understanding which standards apply in which markets, how regional requirements interact with international frameworks, and where mutual recognition exists can significantly reduce compliance costs and accelerate market access.

Strategic Roadmap for Business AI Standards Adoption

Drawing together the ICC’s analysis and recommendations, organizations can construct a strategic roadmap for navigating the evolving AI governance landscape. The first priority is achieving baseline compliance through adoption of core international standards — ISO/IEC 42001 for management systems, ISO/IEC 23894 for risk management, and ISO/IEC 27001 for underlying information security. These foundational frameworks provide governance infrastructure that satisfies multiple regulatory requirements simultaneously.

The second priority is conducting thorough AI impact assessments using the methodology prescribed by ISO/IEC 42005, particularly for high-risk applications. Impact assessment is becoming a regulatory requirement across jurisdictions, and establishing a consistent, standards-based assessment process early positions organizations to meet emerging mandates without reactive compliance scrambles.
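To illustrate what a “consistent, standards-based assessment process” can look like operationally, here is a minimal sketch of an impact assessment captured as structured data so it can be repeated uniformly across systems. The severity levels, fields, and escalation rule are hypothetical illustrations, not methodology taken from ISO/IEC 42005.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch: recording an AI impact assessment as structured data.
# Severity levels, fields, and the escalation rule are hypothetical, not
# drawn from ISO/IEC 42005.

class Severity(Enum):
    NEGLIGIBLE = 1
    MODERATE = 2
    SIGNIFICANT = 3

@dataclass
class ImpactFinding:
    stakeholder: str        # e.g. "patients", "loan applicants"
    description: str
    severity: Severity
    mitigations: list[str] = field(default_factory=list)

@dataclass
class ImpactAssessment:
    system_name: str
    findings: list[ImpactFinding] = field(default_factory=list)

    def needs_escalation(self) -> bool:
        # Escalate when any finding is significant and has no mitigation recorded
        return any(
            f.severity is Severity.SIGNIFICANT and not f.mitigations
            for f in self.findings
        )

assessment = ImpactAssessment("credit-scoring-model")
assessment.findings.append(
    ImpactFinding("loan applicants", "possible proxy discrimination",
                  Severity.SIGNIFICANT)
)
print(assessment.needs_escalation())
```

Structuring assessments this way is what lets an organization meet an emerging mandate without a reactive compliance scramble: the same record format serves every system and every jurisdiction that asks for it.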

Third, organizations should actively engage with standards development processes, either directly through national standards bodies or through industry associations and trade groups. Early engagement provides strategic intelligence about evolving requirements and the opportunity to influence standards in ways that reflect operational realities. For organizations deploying AI across the EU and US markets, monitoring both CEN-CENELEC harmonised standards development and NIST framework updates is essential.

Fourth, building internal capacity through training and education ensures that AI governance is not confined to compliance departments but embedded across organizational functions. The ICC’s emphasis on awareness and education reflects the reality that effective AI governance requires understanding at all levels — from board oversight to engineering implementation to operational deployment.

The adoption of the Global Digital Compact by the UN General Assembly in September 2024, which explicitly calls on standards development organizations to collaborate on interoperable AI standards upholding safety, reliability, sustainability, and human rights, signals growing international consensus on the direction the ICC advocates. Organizations that align their governance practices with this trajectory position themselves advantageously for a future where international standards increasingly define the baseline for responsible AI.

Make AI governance documents engaging and accessible for every stakeholder in your organization

Start Now →

Frequently Asked Questions

What is the ICC’s position on AI governance and standards in 2025?

The International Chamber of Commerce advocates for internationally recognized, industry-driven standards as the primary mechanism to bridge divergent AI regulatory approaches across jurisdictions. Their July 2025 policy paper emphasizes that standards like ISO/IEC 42001 and ISO/IEC 23894 can provide consistent compliance pathways while preventing the costly regulatory fragmentation that threatens global AI innovation and trade.

How does regulatory fragmentation affect AI businesses?

Regulatory fragmentation creates duplicative compliance requirements across jurisdictions, increases operational costs, hinders cross-border AI deployment, and disproportionately burdens small and medium-sized enterprises. The ICC warns that divergent national and regional technical requirements raise the cost of doing business and risk slowing AI adoption, limiting productivity gains that could benefit all economies.

What are the key international AI standards referenced by the ICC?

The ICC highlights several foundational standards: ISO/IEC 42001:2023 for AI management systems, ISO/IEC 23894:2023 for AI risk management guidance, ISO/IEC 42005:2025 for AI system impact assessment, and ISO/IEC 27001:2022 for information security. These standards provide scalable frameworks that businesses can adopt to demonstrate responsible AI practices across multiple regulatory environments.

How do international AI standards help with EU AI Act compliance?

International standards can serve as practical tools to meet EU AI Act requirements, particularly for high-risk AI systems. However, the European Commission has separately tasked CEN-CENELEC with developing EU-specific harmonized standards, which may diverge from ISO/IEC standards. The ICC recommends that regional standards remain fully compatible with international standards to prevent market fragmentation and reduce compliance complexity.

What recommendations does the ICC make for governments regarding AI standards?

The ICC makes eight key recommendations: promote strategic alignment in standards development, ensure local business participation, prioritize industry-driven global standards, champion multistakeholder collaboration, leverage existing standards in regulation, incorporate standards into public procurement, support company participation through funding and incentives, and enhance awareness and education around AI standards implementation.

Your documents deserve to be read.

PDFs get ignored. Presentations get skipped. Reports gather dust.

Libertify transforms them into interactive experiences people actually engage with.

No credit card required · 30-second setup

Our SaaS platform, AI Ready Media, transforms complex documents and information into engaging video storytelling to broaden reach and deepen engagement. We spotlight overlooked and unread important documents. All interactions seamlessly integrate with your CRM software.