AI and Cybersecurity in Banking | Risks & Opportunities
Table of Contents
- AI and Cybersecurity in Banking — The Dual-Use Challenge
- AI Adoption in Banking — Current State and Investment Trends
- Machine Learning Applications for Banking Security
- The Cybersecurity Threat Landscape for Banking AI
- Adversarial Attacks on Banking Machine Learning Models
- Data Poisoning and Fraud Detection Vulnerabilities
- Malicious AI Tools Targeting Financial Institutions
- Defensive Strategies for Banking AI Systems
- Building Secure, Trustworthy, and Resilient ML Models
📌 Key Takeaways
- 50% AI Adoption in Banking: Half of banking sector respondents report having already implemented AI, with 89% planning significant cybersecurity investment increases and 90% prioritizing generative AI investment for 2025.
- AI Is a Dual-Use Technology: Machine learning simultaneously strengthens banking defenses through fraud detection and intrusion prevention while enabling more sophisticated cyber attacks through tools like WormGPT and FraudGPT.
- Four Critical Threat Vectors: Banking ML models face data extraction, data poisoning, model extraction, and evasion attacks — with documented real-world cases of fraud detection systems being compromised through training data manipulation.
- Mobile Financial Threats Doubled: Kaspersky data reveals the number of users affected by mobile financial threats doubled in 2024 compared to 2023, with the upward trend expected to persist through 2025.
- Four Essential ML Characteristics: Banking AI systems must be built with security, trust, robustness, and resilience to withstand adversarial attacks while maintaining reliable performance in critical financial operations.
AI and Cybersecurity in Banking — The Dual-Use Challenge
Artificial intelligence is reshaping the banking industry with unprecedented opportunities for innovation and efficiency, but this transformation carries a fundamental paradox that every financial institution must confront. According to research by Kovačević, Radenković, and Nikolić from the University of Belgrade, AI is inherently a dual-use technology in banking — it can strengthen cybersecurity defenses through advanced pattern recognition and automated threat response, while simultaneously enabling more sophisticated and scalable cyber attacks by malicious actors.
The stakes are exceptionally high in banking because monetary gain remains the primary motivation for cyber attacks against the financial sector. Kaspersky data from 2024 reveals a significant global rise in mobile financial threats, with the number of affected users doubling compared to 2023, and this upward trend is expected to persist into 2025. This creates an arms race between defensive AI applications and offensive AI capabilities, where the speed and scale at which machine learning operates amplifies both benefits and potential risks beyond human capacity.
The research establishes that while machine learning is not inherently more dangerous than human action, its computational speed and operational scale create unique challenges for the banking sector. ML models can process and analyze data volumes that would take human analysts months or years to review, making them invaluable for fraud detection and threat identification. However, this same capability means that when these systems are compromised — through adversarial attacks, data poisoning, or model extraction — the consequences cascade at machine speed across entire financial ecosystems. For context on how regulators are responding to these challenges, see our analysis of SEC FY2026 cybersecurity examination priorities.
AI Adoption in Banking — Current State and Investment Trends
The banking sector’s AI adoption has reached a critical inflection point. According to Gartner’s 2024 survey data, 50% of respondents in the banking sector report having already implemented AI, while generative AI adoption stands at 40% and growing rapidly. By 2027, over 50% of enterprises are projected to employ industry-specific generative AI models, representing a dramatic increase from just 1% in 2023 — a fifty-fold growth trajectory that underscores the transformational velocity of AI integration in financial services.
Investment priorities for 2025 versus 2024 reveal a striking pattern across the banking industry. Generative AI leads with 90% of organizations planning significant investment increases, followed closely by cybersecurity at 89% and broader AI applications at 85%. This parallel investment in both AI capability and cybersecurity defense reflects the industry’s recognition that advancing one without the other creates dangerous asymmetries — deploying powerful AI tools without corresponding security investments exposes institutions to the very threats those tools could help prevent.
The competitive environment in banking is progressively influenced by digital-first entities — fintech companies and digital banks — rather than conventional institutions with significant branch networks. Four key digital disruptors drive AI integration: advanced data analytics, robotic process automation, embedded banking services, and intelligent infrastructure. Specific applications range from AI-driven chatbots for customer service and robo-advisors for investment guidance to predictive analytics, automated credit assessment, and enhanced fraud detection systems.
The research identifies three sequential tiers of AI implementation in digital banking. The first tier utilizes machine learning to discern patterns and enhance digital banking operations. The second tier achieves general intelligence that can replicate human interactions so effectively that customers and employees may be unaware they are engaging with an AI system. The third and most sophisticated tier envisions AI systems that exceed human banking personnel in analytical capability and decision-making precision. Each tier introduces progressively greater cybersecurity implications.
Machine Learning Applications for Banking Security
Machine learning’s cybersecurity capabilities in banking span seven critical functions:
- Analyzing large volumes of transactional data
- Detecting previously unknown patterns indicative of fraud or intrusion
- Identifying and preventing system intrusions in real time
- Detecting malicious code within banking infrastructure
- Investigating attacks on targeted systems
- Analyzing entry possibilities and uncovering vulnerabilities
- Detecting unusual activities indicating cyber attacks or system misuse
Intrusion Detection Systems powered by machine learning demonstrate distinct strengths and limitations depending on the ML methodology employed. Supervised learning effectively identifies well-documented attack patterns with low false positive rates but struggles with unknown or zero-day threats. Unsupervised learning offers the potential for detecting zero-day attacks that have never been seen before but generates a higher rate of false positives. Reinforcement learning adapts dynamically to evolving cyber threats and provides robust defense capabilities but requires sufficient training time to develop effective defensive responses.
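To make the unsupervised approach concrete, here is a minimal sketch using scikit-learn's IsolationForest to flag out-of-distribution transactions. The feature set, distributions, and contamination rate are illustrative assumptions, not details from the research.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical features per transaction: [amount, hour_of_day, merchant_risk]
normal = rng.normal(loc=[50.0, 14.0, 0.2], scale=[20.0, 4.0, 0.1], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

candidates = np.array([
    [55.0, 13.0, 0.25],    # looks like a typical transaction
    [9500.0, 3.0, 0.90],   # large amount, odd hour, risky merchant
])
print(detector.predict(candidates))  # 1 = normal, -1 = flagged anomaly
```

Because the detector learns only what normal traffic looks like, it can flag novel attack patterns, which is exactly the zero-day strength (and false-positive weakness) noted above.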
Software vulnerability detection represents a particularly valuable application for banking cybersecurity. Modern banking codebases can contain millions of lines of code, making manual security analysis both time-consuming and prone to human error. Machine learning models trained to recognize vulnerability patterns enable effective automation with faster and more accurate detection than traditional code review processes. Zero-day vulnerabilities — previously unknown flaws exploited by sophisticated attackers — are increasingly central to advanced cyber operations, and leveraging ML to detect these new vulnerabilities could significantly enhance banking cybersecurity defenses. The National Institute of Standards and Technology (NIST) Cybersecurity Framework provides foundational guidance that complements ML-based security approaches.
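As a toy illustration of pattern-based vulnerability detection, the sketch below trains a text classifier over code snippets with scikit-learn. The snippets, labels, and tokenization are fabricated for illustration; production systems train on large labeled corpora and far richer code representations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Fabricated training snippets: 1 = vulnerability-prone pattern, 0 = safer idiom
snippets = [
    "query = 'SELECT * FROM users WHERE id=' + user_input",
    "cursor.execute('SELECT * FROM users WHERE id=%s', (uid,))",
    "os.system('rm -rf ' + path)",
    "subprocess.run(['rm', '-rf', path], check=True)",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(token_pattern=r"[A-Za-z_]+"), LogisticRegression())
clf.fit(snippets, labels)
print(clf.predict(["cmd = 'ping ' + host; os.system(cmd)"]))  # e.g. [1]
```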
The Cybersecurity Threat Landscape for Banking AI
Security remains a top priority for corporate boards when considering AI adoption in banking, and for good reason. AI failures or misuses in sensitive financial systems can produce severe consequences ranging from unauthorized transaction approvals to mass data breaches exposing millions of customer records. Cyber attacks against banking institutions are growing in both frequency and sophistication, driving significant costs for defensive measures, incident response, and regulatory compliance. The cybersecurity workforce shortage identified by Oracle and KPMG research compounds this challenge, creating a growing need to automate threat detection and response processes.
The research demonstrates AI’s dual-use nature through a documented experiment by Hazell in 2023, which showed how GPT-3 and GPT-4 could automate sophisticated spear-phishing campaigns through a three-step process. First, the AI gathers personalized biographical data from online sources about targeted banking employees. Second, it crafts customized email messages based on this personal data to maximize the probability of engagement. Third, it embeds malware within those carefully crafted emails. This automated pipeline transforms spear-phishing from a labor-intensive manual process into a scalable operation.
The emergence of purpose-built malicious AI tools poses an especially concerning threat to the banking sector. WormGPT, built on the open-source GPT-J framework, is specifically trained on malware and phishing email data. Unlike ethical AI models that include safeguards against harmful content generation, WormGPT operates without restrictions, enabling efficient generation of malicious content at industrial scale. FraudGPT takes this further by being specifically designed for financial fraud activities. Additional tools including jailbroken ChatGPT variants with DAN prompts and FreedomGPT further expand the adversarial AI toolkit available to cybercriminals targeting financial institutions. The European Union Agency for Cybersecurity (ENISA) tracks the evolving threat landscape facing financial services institutions across Europe.
Adversarial Attacks on Banking Machine Learning Models
The research identifies four fundamental types of threat vectors targeting machine learning models deployed in banking. Data extraction attacks involve adversaries attempting to uncover the data on which the model was trained — particularly dangerous in banking where models are trained on sensitive customer financial information, transaction histories, and personally identifiable data. Model extraction attacks enable adversaries to obtain information about the model’s internal structure, architecture, and decision boundaries.
Evasion attacks force ML models to make incorrect predictions while allowing malicious activities to avoid detection. The attacker generates adversarial examples — inputs that are minimally modified in ways imperceptible to humans but sufficient to cause the model to misclassify the data. Research by Goodfellow et al. demonstrates that changes to just a few pixels of input data can significantly alter classification results while remaining invisible to a human observer. In banking applications, this means fraudulent transactions could be modified just enough to bypass ML-based detection systems while appearing completely normal to human reviewers.
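Goodfellow et al.'s fast gradient sign method (FGSM) is the canonical construction behind such adversarial examples. The sketch below implements it in PyTorch against a placeholder model; the toy linear classifier and the epsilon value are assumptions for illustration.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.01):
    """Shift x by epsilon along the gradient sign to increase the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # A tiny step along the gradient's sign is often enough to flip the
    # prediction while the change stays imperceptible to a human.
    return (x + epsilon * x.grad.sign()).detach()

model = nn.Linear(10, 2)    # toy stand-in for a deployed classifier
x = torch.randn(1, 10)      # stand-in for transaction features
label = torch.tensor([0])   # the model's current (correct) class
x_adv = fgsm_perturb(model, x, label)
```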
The three categories of attacker access levels create distinct risk profiles for banking AI systems. Black box attacks operate without insight into the model’s internal details, relying instead on analyzing input-output relationships to identify exploitable vulnerabilities. White box attacks involve complete knowledge of the model’s architecture, enabling attackers to create precise perturbations that exploit specific weaknesses. Gray box attacks fall between these extremes, with partial access to model information. Banking institutions must defend against all three scenarios simultaneously, as different attackers — from external cybercriminals to insider threats — operate at different access levels.
Data Poisoning and Fraud Detection Vulnerabilities
Data poisoning represents one of the most insidious threats to banking AI systems because it corrupts the learning process itself rather than attacking the deployed model. Attackers manipulate training data by introducing false examples designed to make the model produce systematically inaccurate predictions. Because ML models learn statistical decision boundaries rather than explicit rules, minimal changes in input data that are imperceptible to human auditors can lead to fundamental misclassification of financial transactions.
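A minimal sketch of one common poisoning technique, label flipping, shows how relabeling a slice of fraudulent training examples as legitimate degrades a fraud classifier's recall. The synthetic data and the scikit-learn model are illustrative assumptions; the research does not specify the mechanics of the documented attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)       # 1 = fraud (synthetic rule)
clean = LogisticRegression().fit(X, y)

# The "attack": relabel half of the fraud examples as legitimate
y_poisoned = y.copy()
fraud_idx = np.where(y == 1)[0]
y_poisoned[fraud_idx[: len(fraud_idx) // 2]] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

# Fraud recall drops sharply on held-out data after poisoning
X_test = rng.normal(size=(500, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 1.5).astype(int)
fraud = y_test == 1
print("clean fraud recall:   ", clean.score(X_test[fraud], y_test[fraud]))
print("poisoned fraud recall:", poisoned.score(X_test[fraud], y_test[fraud]))
```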
The research documents a real-world case of data poisoning in a financial institution where attackers successfully manipulated the training data used by a fraud detection system. The result was devastating — the compromised system began approving fraudulent transactions as legitimate, effectively neutralizing the institution’s primary defense against financial fraud while the system’s operators believed it was functioning normally. This attack is particularly dangerous because it corrupts the model’s fundamental understanding of what constitutes legitimate versus fraudulent activity.
A related threat scenario described in the research involves ML models designed to detect security incidents such as unauthorized access or data breaches. An attacker could insert manipulated data into the training set that alters the model’s ability to detect specific types of security incidents accurately. This could allow unauthorized individuals or malicious activities to go entirely unnoticed by automated security monitoring systems, creating persistent access for attackers who effectively teach the AI to ignore their presence. Model inversion attacks compound this risk — if a model trained on confidential banking data is compromised, model inversion techniques can reveal key characteristics of the underlying customer data used for training. For a broader view of how AI intersects with financial regulation, explore our analysis of AI in ESG for financial institutions.
Malicious AI Tools Targeting Financial Institutions
The proliferation of purpose-built malicious AI tools represents a qualitative shift in the cybersecurity threat landscape facing banking institutions. WormGPT stands as the most documented example — built on the open-source GPT-J framework and specifically trained on malware datasets and phishing email examples. Unlike commercial AI systems like ChatGPT that include ethical guardrails and content filters, WormGPT operates without any safeguards, making it capable of generating sophisticated malicious content including convincing phishing emails, malware code, and social engineering scripts tailored to banking contexts.
FraudGPT takes specialization further by being purpose-designed for financial fraud activities. While few technical details have been published in cybersecurity research, the tool reportedly enables criminals with limited technical expertise to generate convincing banking fraud schemes, craft credential-harvesting pages that mimic legitimate banking portals, and develop social engineering campaigns targeting banking customers and employees. Additional tools in this ecosystem include jailbroken ChatGPT variants using DAN (Do Anything Now) prompts, AutoGPT configurations designed for harmful content generation, and FreedomGPT, which operates without content restrictions.
These tools fundamentally democratize cybercrime targeting the financial sector. Previously, sophisticated phishing campaigns and malware development required significant technical expertise. Now, malicious AI tools lower the barrier to entry, enabling less skilled attackers to execute campaigns that previously required specialized knowledge. The implications for banking are profound — the volume of sophisticated attacks is likely to increase dramatically as these tools become more accessible and refined. The Bank for International Settlements has published guidance on how financial institutions should respond to AI-driven cybersecurity threats.
Defensive Strategies for Banking AI Systems
The research outlines a comprehensive defensive framework for protecting banking AI systems against adversarial attacks. The foundation rests on four strategic pillars: understanding and simulating different types of attacks to identify vulnerabilities before adversaries exploit them, detecting adversarial attacks in real time through monitoring and anomaly detection, training robust models that maintain performance under adversarial conditions, and understanding the specific weaknesses and vulnerabilities of deployed systems through continuous assessment.
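One lightweight heuristic for the real-time detection pillar is a prediction-stability check: adversarially perturbed inputs often sit near a decision boundary, so their predicted class flips under small random noise. The sketch below is an assumed heuristic, not a method from the research; the model, noise level, and agreement threshold are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for a deployed classifier

def is_suspicious(x, trials=20, sigma=0.05, agree=0.8):
    """Flag inputs whose predicted class is unstable under random noise."""
    with torch.no_grad():
        base = model(x).argmax(dim=-1)
        stable = 0
        for _ in range(trials):
            noisy = x + sigma * torch.randn_like(x)
            stable += int((model(noisy).argmax(dim=-1) == base).all())
    return stable / trials < agree  # unstable predictions look adversarial

x = torch.randn(1, 10)   # stand-in for incoming transaction features
print(is_suspicious(x))  # True would route the input to manual review
```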
Adversarial training represents the most established defensive technique, involving the deliberate addition of adversarial examples to training datasets. By exposing models to known attack patterns during training, the resulting systems develop greater resilience to adversarial perturbations in production environments. Brute-force adversarial training generates large volumes of adversarial examples across multiple attack vectors, while more targeted approaches focus on the specific attack types most relevant to banking applications — data poisoning of fraud detection systems, evasion attacks on transaction monitoring, and model extraction attempts on proprietary scoring algorithms.
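Here is a minimal PyTorch sketch of the basic adversarial training loop, augmenting each batch with FGSM-generated copies before the update. The model architecture, epsilon, and stand-in data are illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.05):
    """Generate FGSM adversarial copies of a batch."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

for step in range(100):                     # toy training loop
    x = torch.randn(64, 10)                 # stand-in for real features
    y = (x[:, 0] > 0).long()                # stand-in for real labels
    x_all = torch.cat([x, fgsm(x, y)])      # clean + adversarial examples
    y_all = torch.cat([y, y])               # adversarial copies keep labels
    optimizer.zero_grad()                   # also clears grads from fgsm()
    loss_fn(model(x_all), y_all).backward()
    optimizer.step()
```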
Data randomization and gradient masking provide complementary defensive layers. Data randomization introduces controlled noise into training data that reduces the predictability attackers need to craft effective adversarial examples. Gradient masking obscures the internal gradient information that white-box attackers use to identify optimal perturbations, making it significantly more difficult to reverse-engineer model behavior. Defensive strategies must be implemented during both training and testing phases of ML model development, as some attack vectors target the learning process itself while others exploit the deployed model’s inference behavior. For insights into how technology transforms compliance, explore our coverage of digital transformation in financial compliance.
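In its simplest form, data randomization amounts to injecting zero-mean noise into each training batch, as in this brief sketch (the noise scale is an illustrative assumption):

```python
import torch

def randomize(batch, sigma=0.1):
    """Apply small zero-mean Gaussian noise to a training batch."""
    return batch + sigma * torch.randn_like(batch)

x = torch.randn(64, 10)   # stand-in for transaction features
x_train = randomize(x)    # the model trains on the noised copy
```

Gradient masking, by contrast, is typically realized through non-differentiable or randomized inference steps and is harder to capture faithfully in a few lines.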
Building Secure, Trustworthy, and Resilient ML Models
The research establishes four essential characteristics that every machine learning model deployed in banking must possess. Security ensures protection against unauthorized access and manipulation of both the model and its training data. Trust establishes confidence in the model’s outputs and behavioral consistency, particularly critical for applications like credit scoring and fraud detection where incorrect outputs have direct financial consequences. Robustness ensures the model maintains reliable performance even during crisis conditions — whether facing adversarial attacks, unusual market conditions, or degraded data quality. Resilience ensures the system can return to normal functioning within a reasonable timeframe after a disruption.
The labor transformation driven by AI adoption in banking creates both opportunities and challenges for cybersecurity. Conventional banking roles are anticipated to diminish while emerging positions become increasingly crucial — data scientists who understand ML vulnerabilities, behavioral psychologists who can anticipate social engineering patterns, and experience designers who can build secure user interfaces. This workforce evolution means that cybersecurity in the AI era requires not just technical skills but interdisciplinary expertise that bridges machine learning, behavioral science, and financial services domain knowledge.
The research identifies a significant and concerning gap in understanding the full extent of attack success rates against banking ML systems. Addressing this gap requires sustained investment in vulnerability assessments, attack simulations, and robust model training — not as one-time compliance exercises but as continuous processes that evolve alongside the threat landscape. Banking institutions that treat AI security as a periodic audit rather than an ongoing operational priority leave themselves exposed to adversaries who continuously probe for weaknesses and adapt their techniques. The imperative is clear: thorough assessment of machine learning technologies within sensitive banking systems, recognizing that they carry distinctive risks alongside their transformative advantages, is not optional but essential for financial system stability. The Financial Stability Board continues to monitor AI-related risks to global financial stability.
Frequently Asked Questions
How is AI used for cybersecurity in banking?
AI is used for cybersecurity in banking through machine learning-powered fraud detection, intrusion detection systems using supervised, unsupervised, and reinforcement learning, zero-day vulnerability detection, software vulnerability identification in complex codebases, anomaly detection for unusual transaction patterns, and automated threat response to address the cybersecurity workforce shortage. Fifty percent of banking institutions have already implemented AI, with 89% planning significant cybersecurity investment increases.
What are adversarial attacks on banking AI systems?
Adversarial attacks on banking AI systems include four main threat vectors: data extraction where attackers uncover training data, data poisoning where attackers manipulate training data to produce inaccurate predictions, model extraction where attackers obtain information about the model’s internal structure, and evasion attacks where attackers force incorrect predictions to avoid detection. These attacks can be executed via black box, white box, or gray box access levels.
What is data poisoning in banking fraud detection?
Data poisoning in banking fraud detection occurs when attackers manipulate the training data used by machine learning models, introducing false examples that cause the system to approve fraudulent transactions as legitimate. A documented real-world case describes attackers successfully poisoning a financial institution’s fraud detection training data, allowing unauthorized transactions to pass undetected. Even minimal input changes imperceptible to humans can lead to misclassification.
What are WormGPT and FraudGPT?
WormGPT and FraudGPT are malicious AI tools designed specifically for cybercrime. WormGPT is built on the open-source GPT-J framework and trained on malware and phishing email data, operating without ethical safeguards. FraudGPT is specifically designed for fraudulent activities. Both tools enable cybercriminals to generate sophisticated phishing campaigns and malicious content at scale, representing the weaponization of AI technology against financial institutions.
What defensive strategies protect banking AI from attacks?
Defensive strategies for banking AI include adversarial training by adding adversarial examples to training datasets, data randomization to reduce predictability, gradient masking to obscure model internals, simulating different attack types to understand vulnerabilities, training robust models designed with security, trust, resilience, and robustness characteristics, and implementing defenses during both training and testing phases of ML model development.