
FinCEN Warns of Criminal Use of Deepfake Technology to Circumvent Controls - Lexology


2024-11-19  Per Henrikson

A recent FinCEN alert highlights an increase in reports of deepfake identity fraud and describes ways financial institutions can reduce risk and detect the illicit use of AI tools.

Generative artificial intelligence (AI) holds tremendous promise for financial institutions and their customers. But that promise comes with potential peril, as highlighted in a recent alert issued by the U.S. Treasury Department's Financial Crimes Enforcement Network (FinCEN) regarding the misuse of AI by fraudsters. Section 6206 of the Anti-Money Laundering Act of 2020 requires FinCEN to periodically publish threat pattern and trend information derived from Bank Secrecy Act (BSA) filings.

Last week, FinCEN warned financial institutions about the growing use of deepfake media by criminals targeting financial institutions and their customers. Deepfake media, or "deepfakes," is a type of synthetic media, such as video, images, audio, or text, that uses AI to fabricate or manipulate content in a highly realistic but inauthentic way. Fraudsters are increasingly leveraging these AI-generated deepfakes to bypass customer identification and verification processes and customer due diligence controls at financial institutions, and to perpetrate fraud schemes and other financial crimes.
