In the evolving world of financial crime, few developments have emerged as swiftly and alarmingly as the risk of deepfakes. Deepfakes are synthetic media, generally video, audio or images, created with AI to replicate a person’s appearance, voice or even supporting identification documents. These tools let fraudsters produce remarkably convincing impersonations for identity fraud, social engineering and bypassing identity verification systems, making it easier than ever for bad actors to impersonate individuals and manipulate financial systems. Deepfakes pose a unique threat to identity verification and fraud detection, requiring banks to modernize their control environments to keep pace.
In a speech delivered on April 17, 2025, Federal Reserve Governor Michael S. Barr highlighted the escalating threat that generative AI (Gen AI) poses to the financial sector, particularly through the proliferation of deepfakes. He noted a staggering “twentyfold increase over the last three years” in deepfake-related attacks. Gov. Barr underscored the stark contrast between the low-cost, rapidly deployed synthetic media used by fraudsters and the resource-intensive, slow-to-implement controls required of financial institutions: synthetic media can be created and circulated with minimal cost and effort, while financial institutions must invest in careful review, rigorous testing and layered controls. Barr also acknowledged the challenges smaller institutions face and emphasized the need for banks to adopt scalable, thoughtful steps that can meaningfully reduce exposure to AI-driven fraud.
To address this growing risk, banks should begin by evaluating and enhancing their existing controls in a manner proportionate to their size and complexity. Scalable solutions do not necessarily require high-end technology. Training front-line staff to identify red flags of synthetic identity misuse (such as unnatural movements on video calls or inconsistencies in submitted documentation) can go a long way toward mitigating risk. Adding out-of-band verification (e.g., call-back procedures) for high-risk transactions, reinforcing manual identity reviews during new-customer onboarding and implementing dual authorization for account changes can also serve as practical, low-cost defenses. Some vendors now offer affordable, modular fraud detection tools, including basic liveness detection or media forensics capabilities, which can supplement traditional customer due diligence.
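To make the layered-control idea concrete, the following Python sketch shows how simple, rule-based logic might route requests to the low-cost defenses described above. It is purely illustrative, not a production fraud engine: the dollar threshold, field names and helper functions such as requires_callback and requires_dual_authorization are assumptions invented for this example, and each bank would define its own criteria.

```python
from dataclasses import dataclass

# Illustrative sketch of layered, low-cost controls: out-of-band
# verification for high-risk transactions, dual authorization for
# account changes, and extra review for video-originated requests.

HIGH_RISK_AMOUNT = 10_000  # assumed threshold; each bank sets its own


@dataclass
class Request:
    kind: str        # e.g., "wire" or "account_change"
    amount: float    # dollar amount; 0 for non-monetary changes
    channel: str     # e.g., "video_call", "online", "branch"
    new_payee: bool  # first-time beneficiary?


def requires_callback(req: Request) -> bool:
    """Out-of-band verification (call-back to a number already on file)
    for transactions matching simple high-risk indicators."""
    return req.kind == "wire" and (req.amount >= HIGH_RISK_AMOUNT or req.new_payee)


def requires_dual_authorization(req: Request) -> bool:
    """A second employee must approve sensitive account changes."""
    return req.kind == "account_change"


def controls_for(req: Request) -> list[str]:
    controls = []
    if requires_callback(req):
        controls.append("out-of-band call-back")
    if requires_dual_authorization(req):
        controls.append("dual authorization")
    if req.channel == "video_call":
        # Video-originated requests get a manual identity review,
        # given the risk of deepfaked video impersonation.
        controls.append("manual identity review")
    return controls


print(controls_for(Request("wire", 25_000, "video_call", new_payee=True)))
# ['out-of-band call-back', 'manual identity review']
```

The point of the sketch is that the controls Gov. Barr describes as “scalable” can be expressed as a handful of deterministic rules layered on top of existing workflows, rather than requiring a new detection platform.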
In addition to internal controls, a key risk area lies in the oversight of third-party relationships. As banks increasingly partner with vendors and fintechs to deliver services, it is essential to evaluate not only the vendor’s performance but also how AI is used in the services they provide. Does the vendor rely on AI models for customer verification, risk scoring or fraud detection? If so, what guardrails are in place to detect misuse, synthetic identities or deepfakes? Banks must remember that they remain ultimately responsible for the actions and outputs of their third-party vendors, even when those services are outsourced. This includes ensuring vendors operate within the bank’s risk appetite and regulatory expectations. To meet this obligation, banks should enhance their third-party risk management programs to include specific due diligence around AI model governance, data integrity and fraud control capabilities. Periodic reviews, contract clauses that require transparency, and reporting on AI performance and fraud detection effectiveness are all steps a bank may consider to ensure it maintains oversight of these third parties.
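One way to operationalize this due diligence is to track the questions above as a structured checklist that feeds vendor reviews. The sketch below is a hypothetical example; the item names and the open_items helper are assumptions made for illustration and do not represent a regulatory template.

```python
# Hypothetical AI-focused due-diligence checklist for third-party reviews.
# Items mirror the questions discussed above; wording is illustrative only.

AI_DUE_DILIGENCE_ITEMS = {
    "uses_ai_models": "Does the vendor rely on AI for verification, risk scoring or fraud detection?",
    "deepfake_guardrails": "What guardrails detect misuse, synthetic identities or deepfakes?",
    "model_governance": "Is there documented AI model governance (validation, change control)?",
    "data_integrity": "How is the integrity of training and input data assured?",
    "performance_reporting": "Does the vendor report on AI performance and fraud detection effectiveness?",
    "transparency_clause": "Do contract clauses require transparency into material model changes?",
}


def open_items(responses: dict[str, bool]) -> list[str]:
    """Return the checklist questions the vendor has not yet satisfied."""
    return [q for key, q in AI_DUE_DILIGENCE_ITEMS.items() if not responses.get(key, False)]


# Example: a vendor that has documented its model use and governance,
# but has not yet evidenced the remaining items.
print(open_items({"uses_ai_models": True, "model_governance": True}))
```

Whether tracked in code, a GRC platform or a spreadsheet, the value is the same: unanswered items become visible follow-ups for periodic reviews and contract negotiations rather than one-time onboarding questions.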
The risks highlighted by Gov. Barr certainly aren’t new to the regulatory landscape. In November 2024, FinCEN issued an alert (FIN-2024-Alert004) to help financial institutions identify fraud schemes involving deepfake media created with generative AI. The alert is part of the U.S. Department of the Treasury’s broader initiative to address the challenges AI poses in the financial sector. It offers foundational awareness of the deepfake threat and guides banks in reviewing and updating their risk-based procedures to address the specific challenges deepfakes present. The alert also lists specific red flags to help institutions identify potential deepfakes, including but not limited to anomalies in submitted images or videos, discrepancies between known customer data and new applications, and unusual transaction behavior following new account openings. It further provides SAR filing guidance, directing institutions to use the key term “FIN-2024-DEEPFAKEFRAUD” when reporting suspected activity. Banks should incorporate these indicators into their fraud programs and consider whether their current systems are sufficient to capture synthetic identity activity in a timely manner.
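The alert’s indicators lend themselves to simple monitoring rules. The sketch below combines a few of the red flags named above into an escalation check that references the alert’s SAR key term; the field names, thresholds and new-account window are assumptions chosen for illustration, not values prescribed by FinCEN.

```python
from datetime import date, timedelta

SAR_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"  # key term from the FinCEN alert

# Illustrative screen combining red flags named in the alert:
# documentation inconsistencies, media anomalies, and unusual
# transaction behavior shortly after account opening.
NEW_ACCOUNT_WINDOW = timedelta(days=30)   # assumed window
RAPID_OUTFLOW_THRESHOLD = 5_000           # assumed threshold


def deepfake_red_flags(account: dict) -> list[str]:
    flags = []
    if account.get("document_inconsistencies"):
        flags.append("discrepancies between submitted documents and known customer data")
    if account.get("media_anomalies"):
        flags.append("anomalies detected in submitted images or video")
    account_age = date.today() - account["opened_on"]
    if account_age <= NEW_ACCOUNT_WINDOW and account.get("total_outflows", 0) >= RAPID_OUTFLOW_THRESHOLD:
        flags.append("unusual transaction behavior following account opening")
    return flags


def case_note(account: dict) -> str | None:
    """Escalate matching accounts for investigation; any resulting SAR
    should reference the alert's key term per FinCEN's filing guidance."""
    flags = deepfake_red_flags(account)
    if not flags:
        return None
    return f"Escalate for SAR review ({SAR_KEY_TERM}): " + "; ".join(flags)


print(case_note({
    "opened_on": date.today() - timedelta(days=10),
    "document_inconsistencies": True,
    "total_outflows": 7_500,
}))
```

Even where monitoring runs on a vendor platform rather than in-house code, mapping each of the alert’s red flags to a specific rule or case-review step makes it easier to demonstrate that the guidance has been incorporated into the fraud program.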
As banks increasingly rely on AI to combat fraud, it is crucial to also recognize and manage the new risks associated with Gen AI. A robust strategy involves more than just implementing protective technologies; it requires a shift in culture and operations to effectively handle the rising sophistication of synthetic identities, the potential misuse of deepfakes to circumvent security measures, and the vulnerabilities that may arise from third-party vendors utilizing AI tools. Establishing strong AI governance, designing scalable controls and ensuring proper oversight of third-party partners are essential steps in mitigating these threats. Although the danger posed by deepfakes is significant and escalating, with careful planning and adaptation, even smaller community banks can substantially lower their risk and bolster their resilience in this evolving AI-driven landscape.