BIS Innovation Hub Academia Workshop 2025: AI, Privacy and the Future of Financial Infrastructure

📌 Key Takeaways

  • Unprecedented Collaboration: 61 participants from Cornell, ETH Zurich, Stanford, MIT and BIS leadership convened for the first structured academia-central bank technology workshop
  • AI as a Double-Edged Sword: AI agents offer massive productivity gains for financial operations but create a dangerous attacker-defender asymmetry in cybersecurity
  • Privacy by Design: The BIS recommends integrating privacy-enhancing technologies at system inception, not retrofitting them into existing infrastructure
  • Governance Over Technology: Legal and regulatory alignment across jurisdictions remains the primary barrier to cross-border interoperability, not technical feasibility
  • TEEs as Pragmatic Tools: Trusted execution environments provide mature, practical security for payment settlement engines and compliance verification systems

Inside the BIS Innovation Hub’s First Academia Workshop

The Bank for International Settlements (BIS) Innovation Hub convened its inaugural academia workshop from June 10 to 13, 2025, marking a pivotal moment in the relationship between central banking institutions and the academic research community. This four-day event brought together 61 participants, including 23 leading academics from world-renowned institutions such as Cornell University, ETH Zurich, University College London, Stanford University, Brown University, MIT, KU Leuven, TU Dresden, UC Berkeley, and the University of Toronto, alongside senior BIS leaders and Innovation Hub staff.

The workshop’s central objective was clear and ambitious: to align frontier academic research with the practical policy needs of central banks navigating an era of rapid technological change. Rather than focusing on a single technology or application, the event examined the convergence of multiple transformative forces — artificial intelligence, cryptographic privacy, hardware-based security, digital identity, and financial system interoperability — and explored how these technologies collectively reshape the foundations of modern monetary infrastructure.

What made this event particularly significant was its interdisciplinary scope. Participants spanned computer science, economics, law, and finance, reflecting the BIS’s recognition that the challenges facing financial systems cannot be solved through purely technical or purely regulatory approaches. As detailed in the BIS Innovation Hub’s mandate, fostering collaboration between these disciplines is essential for building financial infrastructure that serves the public interest.

Policy Foundations: Trust, Money and Digital Transformation

Before diving into specific technologies, the workshop established a critical policy framework drawn from Chapter 3 of the BIS Annual Report 2025. This framing centered on three fundamental principles that must guide any technological transformation of financial systems: trust, the singleness of money, and the balance between elasticity and integrity.

The concept of trust in financial systems operates on multiple levels. Users must trust that their money is safe, that transactions are final, and that the system will function reliably. Central banks have historically served as the institutional anchor for this trust, but digital transformation introduces new trust vectors — from hardware vendors providing trusted execution environments to AI models making automated decisions. The workshop explored how these new trust relationships can be designed, verified, and governed.

The singleness of money — the principle that a dollar is a dollar regardless of its form — faces new challenges in a world where value can be represented as stablecoins, tokenized wholesale central bank digital currency (CBDC), tokenized commercial bank deposits, or decentralized finance tokens. Each form carries different risk profiles, regulatory treatments, and settlement characteristics. The workshop emphasized that maintaining monetary singleness requires clear legal definitions and regulatory frameworks, not just technological compatibility.

Participants also grappled with the tension between data protection and legitimate regulatory access. Financial regulators need visibility into transactions for anti-money laundering (AML) and combating the financing of terrorism (CFT) purposes, yet privacy-enhancing technologies now make it possible to verify compliance without exposing underlying data. This tension — between privacy and oversight, between innovation and stability — threaded through every subsequent discussion at the workshop.

The policy framing session concluded with a series of open questions that would structure the remaining days: How should central banks position themselves relative to private-sector innovation? What governance structures are needed for multi-stakeholder technology platforms? How can regulatory frameworks keep pace with technology that evolves faster than legislation? These questions remain at the frontier of central banking policy worldwide, and the workshop’s discussions provided some of the most sophisticated answers currently available.

AI Agents in Finance: Productivity Gains and Security Risks

Artificial intelligence dominated much of the workshop’s discussion, with presentations and breakout sessions examining the entire spectrum from narrow task-specific agents to the conceptual possibility of artificial general intelligence. The consensus was both optimistic and cautionary: AI offers transformative productivity gains for financial institutions, but it simultaneously creates new and asymmetric security vulnerabilities.

On the productivity side, participants noted that large language models (LLMs) and AI agents can already automate tasks that previously required months of human effort. Zero-shot capabilities — where an AI can perform a task it has never been specifically trained for — compress timelines dramatically. For central banks, this means faster analysis of financial stability data, more efficient compliance monitoring, and enhanced capacity to process the vast volumes of data generated by modern financial markets.

However, the workshop identified a critical asymmetry in AI’s security implications. While defenders must secure every possible attack vector, attackers need to find only a single vulnerability. AI dramatically lowers the cost and increases the scale of potential attacks, enabling automated phishing campaigns, sophisticated disinformation operations, and AI-generated exploit code. This attacker-defender asymmetry means that the same technology providing productivity gains for legitimate institutions simultaneously empowers malicious actors.

The participants proposed several mitigation strategies. Secure-by-design engineering principles should be embedded from the earliest stages of AI system development, not bolted on after deployment. Formal verification, where mathematical proofs demonstrate that a system behaves as intended, should be applied wherever possible in high-stakes financial applications. The workshop highlighted tools like DecodingTrust and MMDT (Multidimensional Model Diagnostic Tool) as frameworks for systematically evaluating AI trustworthiness.

Reinforcement learning from human feedback (RLHF) was discussed as an alignment technique that helps AI systems behave according to human values, but participants cautioned that RLHF alone is insufficient for the high-stakes contexts typical of financial infrastructure. The workshop advocated for layered approaches combining RLHF with formal safety constraints, adversarial robustness testing, continuous monitoring, and human oversight mechanisms. This multi-layered defense strategy reflects the emerging best practices in AI governance that are reshaping how institutions deploy intelligent systems.

Architecture and Interoperability for Cross-Border Finance

The architecture and interoperability track addressed one of the most persistent challenges in international finance: how to enable seamless value transfers across systems that were designed independently, operate under different legal frameworks, and serve different institutional mandates. Participants explored the reality that multiple system types — centralized real-time gross settlement (RTGS) systems, instant payment hubs, tokenized wholesale CBDC ledgers, and distributed ledger technology (DLT) platforms — will coexist for the foreseeable future.

Rather than pursuing a single unified platform, the workshop advocated for composable interoperability architectures. Several concrete patterns were discussed in detail. A synchronization layer would sit between disparate ledgers, coordinating state changes to ensure consistency without requiring systems to adopt a common technology stack. The burn-and-issue pattern destroys a token on one ledger and creates an equivalent on another, maintaining total supply integrity while enabling cross-system transfers.
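
The burn-and-issue pattern can be sketched in a few lines. The toy Python model below (the `Ledger` class and all names are invented for illustration, not drawn from any BIS prototype) shows the invariant the pattern is meant to preserve: a transfer leaves total supply across the two systems unchanged.

```python
# Minimal sketch of the burn-and-issue pattern: a token is destroyed on the
# source ledger and an equivalent amount is minted on the destination ledger,
# conserving total supply across both systems. Illustrative only -- real
# systems add attestation, settlement-finality checks, and rollback handling.

class Ledger:
    def __init__(self, name, supply):
        self.name = name
        self.supply = supply  # total tokens currently issued on this ledger

    def burn(self, amount):
        if amount > self.supply:
            raise ValueError("cannot burn more than issued supply")
        self.supply -= amount

    def issue(self, amount):
        self.supply += amount


def burn_and_issue(src, dst, amount):
    """Move value across ledgers (atomic in this toy model)."""
    src.burn(amount)   # step 1: destroy on the source ledger
    dst.issue(amount)  # step 2: create the equivalent on the destination


rtgs = Ledger("RTGS", supply=1_000)
dlt = Ledger("DLT", supply=0)
total_before = rtgs.supply + dlt.supply

burn_and_issue(rtgs, dlt, 250)

# The cross-system invariant: total supply is unchanged.
assert rtgs.supply + dlt.supply == total_before
```

In practice the two steps happen on different systems under different operators, which is exactly why the attestation and governance arrangements discussed later matter.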

Hash time-locked contracts (HTLCs) offer a cryptographic mechanism for conditional transfers that execute automatically when predetermined conditions are met within a specified timeframe. For situations where the participating systems have insufficient mutual trust for direct cryptographic settlement, trusted clearing facilities or intermediaries can serve as bridges, absorbing counterparty risk while facilitating transfers.
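
The HTLC mechanics can be sketched with a simple in-memory contract object. Real HTLCs run as on-chain scripts or smart contracts; the class and field names here are hypothetical.

```python
import hashlib
import time

# Toy hash time-locked contract: funds are claimable by whoever reveals the
# preimage of a hash before a deadline; after the deadline the sender can
# reclaim them. Illustrative only.

class HTLC:
    def __init__(self, amount, hashlock, deadline):
        self.amount = amount
        self.hashlock = hashlock  # sha256(secret) fixed at creation
        self.deadline = deadline  # unix time after which refund is allowed
        self.state = "locked"

    def claim(self, preimage, now):
        if self.state != "locked" or now >= self.deadline:
            return False
        if hashlib.sha256(preimage).hexdigest() != self.hashlock:
            return False
        self.state = "claimed"
        return True

    def refund(self, now):
        if self.state == "locked" and now >= self.deadline:
            self.state = "refunded"
            return True
        return False


secret = b"cross-border-settlement-secret"
contract = HTLC(amount=100,
                hashlock=hashlib.sha256(secret).hexdigest(),
                deadline=time.time() + 3600)

assert not contract.claim(b"wrong-secret", now=time.time())  # wrong preimage
assert contract.claim(secret, now=time.time())               # correct preimage
```

The timeout branch is what makes the construct safe without mutual trust: if the counterparty never reveals the secret, the locked funds return to the sender.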

Perhaps the workshop’s most significant insight in this area was that the primary barriers to interoperability are legal, policy, and governance challenges — not technical limitations. Cross-jurisdictional differences in the legal definition of settlement finality, whether tokenized assets represent property or contractual claims, and varying AML/CFT requirements create friction that no purely technological solution can eliminate. The workshop called for coordinated regulatory efforts to harmonize these definitions, arguing that technical interoperability solutions are mature enough for deployment if the governance framework supports them.

The Agora model was cited as an example of a practical approach to burn-and-issue cross-ledger transfers, demonstrating that central bank innovation labs are already producing working prototypes. Participants emphasized that moving from prototypes to production systems requires not just scaling the technology but building the institutional agreements, legal frameworks, and operational procedures that support reliable cross-border settlement.

Privacy-Enhancing Technologies: Compliance Without Exposure

Privacy-enhancing technologies (PETs) emerged as one of the most promising yet challenging areas discussed at the workshop. The fundamental proposition is compelling: PETs can reconcile the tension between financial privacy and regulatory compliance by enabling verification without disclosure. Rather than exposing raw transaction data for AML/CFT checks, cryptographic techniques allow regulators to verify that compliance rules are satisfied without seeing the underlying sensitive information.

The workshop examined several PET categories in detail. Zero-knowledge proofs (ZKPs) allow one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself. In a financial context, this could mean proving that a transaction does not involve sanctioned entities without revealing the identities of the transacting parties. Multi-party computation (MPC) enables multiple parties to jointly compute a function over their private inputs without revealing those inputs to each other — useful for collaborative fraud detection across institutions.
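
The MPC idea can be illustrated with additive secret sharing, its simplest building block. In this hypothetical Python sketch, three institutions compute their combined exposure without any party learning another's input; real MPC protocols add multiplication gates and defenses against malicious participants.

```python
import secrets

# Toy additive secret sharing over a prime field: each party splits its
# private value into random shares that sum to it mod P, distributes the
# shares, and only the aggregate is ever reconstructed. Illustrative only.

P = 2**61 - 1  # a prime modulus; all arithmetic is mod P

def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

private_exposures = [120, 45, 300]  # each bank's private input
n = len(private_exposures)

# Party i sends its j-th share to party j.
all_shares = [share(v, n) for v in private_exposures]

# Each party locally sums the shares it received; the partial sums combine
# into the joint total without any individual input being revealed.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]
joint_total = sum(partial_sums) % P

assert joint_total == sum(private_exposures)
```

Each individual share is uniformly random, so no subset of fewer than all shares of a value leaks anything about it; only the agreed aggregate emerges.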

Homomorphic encryption allows computations to be performed on encrypted data, producing encrypted results that, when decrypted, match the output of the same computations on plaintext data. This technology could enable cloud-based analytics on sensitive financial data without ever exposing the data to the cloud provider. Differential privacy adds carefully calibrated noise to data outputs, providing mathematical guarantees about the privacy of individual records while preserving the statistical utility of aggregate results. Federated analysis distributes computation across multiple data holders, allowing insights to be derived without centralizing sensitive information.
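
Of these techniques, differential privacy is the easiest to sketch. The toy Python example below publishes a noisy count of large transactions using the Laplace mechanism; the epsilon value and data are illustrative, and a real deployment would also track a cumulative privacy budget across queries.

```python
import random

# Toy Laplace mechanism: add calibrated noise to an aggregate statistic so
# that any single record's presence has a bounded effect on the output.

def laplace_noise(scale):
    # Laplace(0, scale) sampled as the difference of two exponential draws.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon, sensitivity=1.0):
    """Publish a noisy count satisfying epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity / epsilon)

transactions = [{"amount": a} for a in (50, 9_000, 120, 15_000, 30)]
noisy = dp_count(transactions, lambda t: t["amount"] > 10_000, epsilon=0.5)
# `noisy` is close to the true count (1) but masks any individual record
```

Smaller epsilon means more noise and stronger privacy; the sensitivity parameter captures how much one record can change the true answer (here, at most 1 for a count).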

Despite the theoretical promise, the workshop identified significant practical challenges. Scalability and latency remain the dominant concerns for deploying PETs in high-throughput financial systems. Tokenized wholesale CBDC settlement and interbank payments require processing speeds and volumes that current PET implementations struggle to achieve. Hardware acceleration through application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) was identified as a promising direction, potentially bringing PET performance to levels suitable for real-time financial operations.

The most emphatic recommendation from this track was “privacy by design” — the principle that PETs should be integrated into financial systems from their inception rather than retrofitted into existing architectures. Retrofitting introduces complexity, creates potential vulnerabilities at integration points, and often compromises the privacy guarantees that PETs are designed to provide. The workshop also stressed the importance of designing careful revocation and authorized de-anonymization mechanisms for lawful access situations, ensuring that privacy protections do not become absolute barriers to legitimate law enforcement needs.

Trusted Execution Environments as Security Building Blocks

Trusted execution environments (TEEs) received substantial attention as a mature, pragmatic technology for securing sensitive computations in financial infrastructure. Unlike PETs that rely on mathematical proofs and cryptographic protocols, TEEs provide hardware-level isolation through dedicated processor enclaves that protect data and code from unauthorized access — even from the operating system or cloud provider hosting the computation.

The key capabilities of TEEs include confidentiality (data within the enclave cannot be read by external processes), integrity guarantees (the enclave’s code cannot be modified without detection), and remote attestation (a third party can verify that the enclave is running the expected software on genuine hardware). These properties make TEEs particularly suitable for applications where multiple parties need to collaborate on sensitive data without trusting each other’s infrastructure.
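
A stylized remote attestation check might look like the Python sketch below, where an HMAC stands in for the vendor's asymmetric quote signature and all names are invented for illustration.

```python
import hashlib
import hmac

# Toy attestation flow: the enclave reports a measurement (hash of its code)
# signed with a hardware-provisioned key, and a remote verifier checks both
# the signature (genuine hardware) and the measurement (expected software).

HARDWARE_KEY = b"provisioned-at-manufacture"  # stand-in for the vendor root key

def enclave_quote(code):
    measurement = hashlib.sha256(code).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), "sha256").hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote, expected_code):
    expected_measurement = hashlib.sha256(expected_code).hexdigest()
    expected_sig = hmac.new(HARDWARE_KEY, quote["measurement"].encode(),
                            "sha256").hexdigest()
    return (hmac.compare_digest(quote["signature"], expected_sig)  # genuine hardware
            and quote["measurement"] == expected_measurement)      # expected software

settlement_code = b"def settle(batch): ..."
quote = enclave_quote(settlement_code)
assert verify_quote(quote, settlement_code)
assert not verify_quote(quote, b"tampered code")
```

The two checks correspond to the two trust questions attestation answers: is this genuine vendor hardware, and is it running exactly the software the counterparty expects?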

Germany’s e-health platform was cited as a practical example of successful TEE deployment, demonstrating that the technology can scale to serve millions of users in a regulated environment. For financial applications, the workshop identified settlement engines, KYC (Know Your Customer) verification systems, and cross-institutional data analytics as prime candidates for TEE-based architectures.

However, participants also identified significant risks and limitations. TEEs rely on trusting the hardware vendor — a concern when geopolitical tensions affect supply chains and when dominance by a single vendor creates systemic risk. Side-channel attacks, which extract information by observing the physical characteristics of the computation (such as timing, power consumption, or electromagnetic emissions), remain a persistent threat. Rollback attacks, in which an attacker restores an enclave to a previous state, and the complexities of backup and recovery add further operational challenges.

The workshop recommended governance measures including M-of-N control policies (requiring multiple parties to approve sensitive operations), immutable audit logs, regular hardware attestation verification, and multi-vendor redundancy to mitigate single-vendor risk. These measures reflect a mature understanding that technology deployment in financial infrastructure requires operational governance that matches the sophistication of the technology itself.
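
An M-of-N control policy reduces to a simple quorum check. The sketch below is illustrative (names invented); a production system would bind each approval to a cryptographic signature and record it in an immutable audit log.

```python
# Toy M-of-N control policy: a sensitive operation proceeds only when at
# least M distinct, authorized approvers have signed off.

def authorize(operation, approvals, authorized, m):
    """Allow `operation` only with at least m distinct authorized approvals."""
    valid = set(approvals) & set(authorized)  # ignore unknown approvers
    return len(valid) >= m                    # quorum of distinct parties

OPERATORS = {"alice", "bob", "carol", "dave"}  # N = 4 key holders

assert authorize("rotate-enclave-key", {"alice", "bob"}, OPERATORS, m=2)
assert not authorize("rotate-enclave-key", {"alice", "mallory"}, OPERATORS, m=2)
```

Because approvals are deduplicated as a set, one party approving twice still counts once — the policy demands M distinct parties, not M signatures.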

Self-Sovereign Identity and Verifiable Credentials

The identity track explored how self-sovereign identity (SSI) models and verifiable credentials can transform financial services by giving individuals control over their identity data while enabling efficient, privacy-preserving verification. In traditional identity systems, centralized authorities hold and verify identity data, creating honeypots for attackers and leaving individuals with little control over how their data is used. SSI inverts this model: individuals hold their credentials and selectively disclose them to verifiers.

The verifiable credentials framework separates three roles: issuers (who create credentials based on verified information), holders (who store and present credentials), and verifiers (who check credential validity). This separation enables new institutional arrangements where, for example, a government issues an identity credential, a bank verifies it during account opening, and the individual controls exactly which attributes are disclosed.
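
The issuer-holder-verifier flow, including selective disclosure, can be sketched with salted hash commitments, loosely in the spirit of salted-hash schemes such as SD-JWT. In this hypothetical Python sketch, an HMAC stands in for the issuer's public-key signature and all names are illustrative.

```python
import hashlib
import hmac
import json
import secrets

# Toy selective-disclosure credential: the issuer commits to each attribute
# with a salted hash and signs the commitments; the holder reveals only
# chosen attributes (with their salts); the verifier checks each revealed
# attribute against the signed commitments. Illustrative only.

ISSUER_KEY = b"issuer-signing-key"  # stand-in for an asymmetric signing key

def issue(attributes):
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
                   for k, v in attributes.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, "sha256").hexdigest()
    return {"commitments": commitments, "signature": signature}, salts

def present(credential, attributes, salts, disclose):
    # Holder reveals only the chosen attributes and their salts.
    return {"credential": credential,
            "disclosed": {k: (attributes[k], salts[k]) for k in disclose}}

def verify(presentation):
    cred = presentation["credential"]
    payload = json.dumps(cred["commitments"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, "sha256").hexdigest()
    if not hmac.compare_digest(cred["signature"], expected):
        return False  # not issued by the trusted issuer
    return all(hashlib.sha256((salt + str(value)).encode()).hexdigest()
               == cred["commitments"][k]
               for k, (value, salt) in presentation["disclosed"].items())

attrs = {"name": "A. Customer", "over_18": True, "residency": "CH"}
credential, salts = issue(attrs)
# Account opening: disclose only age status, not name or residency.
proof = present(credential, attrs, salts, disclose=["over_18"])
assert verify(proof)
```

The verifier learns that the issuer vouched for `over_18` without seeing the holder's name or residency — the selective disclosure property described above.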

Central banks, the workshop concluded, should position themselves as validators and enablers rather than direct identity issuers. This means establishing governance frameworks and technical standards that credential systems must meet, rather than building and operating identity platforms directly. This position aligns with the BIS’s broader philosophy of providing public-good infrastructure while leaving implementation details to the market.

Significant concerns were raised about biometric identity anchors. Biometric data is inherently irreversible — unlike passwords, fingerprints and facial features cannot be changed if compromised. Impersonation using stolen biometric data poses permanent risks. The workshop favored cryptographic approaches to identity that do not rely on immutable biometric anchors, suggesting instead the use of revocable cryptographic keys that can be rotated if compromised. Legal recognition of verifiable credentials remains an open challenge, as most jurisdictions’ identity laws predate these technologies and do not explicitly address their unique characteristics.

DeFi Composability: Innovation Within Regulatory Bounds

Decentralized finance received nuanced treatment at the workshop, with participants acknowledging both its innovative primitives and its systemic risks. Rather than dismissing DeFi or embracing it uncritically, the discussion focused on identifying which elements of DeFi architecture are genuinely useful for regulated financial systems and which must be constrained for systemic safety.

DeFi’s contributions to financial technology include composability (the ability to combine different financial protocols like building blocks), transparency (on-chain transactions are publicly auditable), and programmable settlement (smart contracts that execute automatically when conditions are met). These features can improve efficiency, reduce settlement risk, and enable novel financial products.

However, DeFi also introduces operational, legal, and systemic risks that are incompatible with the stability requirements of regulated financial systems. Governance challenges in permissionless systems, the absence of clear legal liability frameworks, smart contract vulnerabilities, and the potential for rapid contagion across interconnected protocols all require careful regulatory attention.

The workshop called for a clear regulatory delineation between different categories of digital value: stablecoins (private-sector instruments pegged to fiat currencies), tokenized wholesale central bank money (digital representations of central bank reserves), and commercial bank money tokens (digital representations of bank deposits). Each category carries different risk profiles and should be subject to appropriate regulatory treatment, according to the BIS Annual Report 2025’s framework for maintaining monetary integrity.

Participants suggested that regulators should adopt a technology-neutral approach that focuses on the economic function and risk profile of digital instruments rather than their underlying technology. This would allow beneficial DeFi innovations to be incorporated into regulated finance while ensuring that systemic risks are appropriately managed.

Recommendations and the Road Ahead for Central Banks

The workshop concluded with a comprehensive set of cross-cutting recommendations organized by stakeholder group. For central banks and policymakers, the message was clear: lead through governance, standards, and frameworks rather than prescribing specific technologies. Central banks should invest in building internal expertise in AI risk management, establish clear regulatory expectations for privacy-by-design implementations, and pursue international coordination on the legal definitions that underpin cross-border interoperability.

For system architects and technologists, the workshop emphasized composable architectures and interoperability playbooks that accommodate multiple ledger types with minimal reconfiguration. Investment in PET scalability through hardware acceleration and cryptographic optimization is critical for moving privacy-preserving technologies from laboratory demonstrations to production financial systems. The choice of interoperability patterns — burn-and-issue, HTLCs, trusted intermediaries — should match the trust and legal environments of participating jurisdictions.

The academic community received a call for intensified cross-disciplinary research that bridges technical cryptography and systems engineering with legal, economic, and governance analysis. Priority research topics include scalable zero-knowledge proof systems, provable AI safety techniques, distributed TEE architectures, and interoperability standards that map cleanly to legal definitions of settlement finality. The workshop also called for more public-good tooling — open-source evaluation frameworks, reproducible benchmarks, and shared testing environments that accelerate progress across the field.

For the private sector, the recommendations centered on transparency, standards adoption, and collaborative security. Companies deploying AI in financial contexts should adopt transparent evaluation standards and coordinate with banks and central banks on interoperability and compliance primitives. Hardware vendors providing TEEs should support multi-vendor attestation and governance frameworks that prevent single points of trust failure.

Looking forward, the BIS Innovation Hub signaled its intention to continue structured engagement with academia through future workshops, collaborative research projects, and possibly shared experimentation platforms. The success of this inaugural event — measured by the quality of interdisciplinary dialogue and the specificity of actionable recommendations — suggests that this model of collaboration will become an increasingly important channel for translating frontier research into practical financial infrastructure. Understanding these developments is essential for anyone following the transformation of global financial systems.

Frequently Asked Questions

What is the BIS Innovation Hub Academia Workshop?

The BIS Innovation Hub Academia Workshop is a collaborative event convening academics, central bank leaders, and technology experts to explore how frontier technologies like AI, cryptography, and distributed systems can reshape financial infrastructure. The inaugural 2025 workshop brought together 61 participants from institutions including Cornell, ETH Zurich, Stanford, and MIT.

How will AI agents impact central banking and financial systems?

AI agents offer substantial productivity gains for central banking operations, from automating compliance checks to enhancing fraud detection. However, the BIS workshop highlighted critical risks including attacker-defender asymmetry, where AI lowers costs for cyber attackers while defenders must secure multiple vectors simultaneously. The report recommends secure-by-design engineering and formal verification for high-stakes financial AI deployments.

What are privacy-enhancing technologies (PETs) in finance?

Privacy-enhancing technologies in finance include zero-knowledge proofs, multi-party computation, homomorphic encryption, differential privacy, and federated analysis. These tools enable compliance checks like AML/CFT verification without exposing sensitive transaction data. The BIS report emphasizes integrating PETs at system inception rather than retrofitting them into existing infrastructure.

What role do trusted execution environments play in financial infrastructure?

Trusted execution environments (TEEs) provide hardware-level isolation for sensitive computations, offering confidentiality, integrity guarantees, and remote attestation capabilities. The BIS workshop identified TEEs as practical building blocks for payment settlement engines and KYC verification systems, noting successful implementations such as Germany’s e-health platform.

How does the BIS propose to achieve cross-border financial interoperability?

The BIS proposes multiple architectural patterns including synchronization layers between ledgers, burn-and-issue token mechanisms, hash time-locked contracts (HTLCs), and trusted clearing facilities. The workshop concluded that the primary barriers to interoperability are legal and governance alignment across jurisdictions rather than purely technical limitations.
