
The Imitation Game: The Rising Threat of AI-Powered Fraud and Financial Crime

Recent developments in Artificial Intelligence (AI) have already improved efficiency and personal convenience, but they also cast a giant shadow over financial services, as AI-powered fraud and financial crime has become a rising threat worldwide. AI innovation is truly a mixed blessing: the same capabilities that serve customers also empower cyber-criminals with sophisticated tools and techniques, and banks need to reimagine their strategies to cope effectively.


AI-powered fraud and financial crime refers to the use of Generative AI technologies to create tools for fraudulent or criminal activities within online financial services. This rising threat can take various forms, such as bank scams that trick victims, deep fakes that impersonate individuals or bypass identity verification systems, and automated fraud tools. These activities leverage AI capabilities as a force multiplier, making them more challenging to detect and prevent with traditional security measures.


Deep Fakes

Deep fake technology poses a significant threat in fraud and money laundering, enabling cyber-criminals to exploit vulnerabilities and bypass Know Your Customer (KYC) systems. Here are a few fraud MOs (modi operandi) and money laundering techniques that use deep fakes:

 

  • Identity Theft and Impersonation: Deep fakes allow fraudsters to create highly realistic audio and video impersonations of individuals. By synthesizing convincing voices and facial expressions, criminals can impersonate key figures within financial institutions or high-profile individuals to throw off verification processes.

  • Fake Documents: Deep fake techniques can be used to forge identification documents or doctor video evidence in KYC processes. Fraudsters can use GenAI to create fake documents that pass initial checks, allowing them to open fraudulent accounts or conduct transactions without being flagged.

  • Bypass KYC and Liveness Tests: Many KYC systems employ liveness tests for identity verification. Deep fakes can potentially bypass these systems by presenting a realistic synthetic face during authentication, leading to unauthorized access and fraudulent transactions.

  • Money Laundering through Synthetic Identities: Deep fakes can be used to create synthetic identities by combining real and fake information. Fraudsters may use these synthetic identities to open accounts, make payments, and launder money while evading traditional KYC checks.
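
To make the synthetic-identity pattern above more concrete from the defender's side, here is a minimal sketch of one possible countermeasure: comparing the face embedding from a new onboarding attempt against faces already enrolled under other customer IDs, to catch a single real or synthetic face being reused across accounts. Everything here is illustrative and hypothetical, including the `flag_reused_face` helper, the 128-dimensional embeddings, and the similarity threshold; a production KYC system would calibrate thresholds against its own face-recognition model and use an approximate nearest-neighbor index rather than a linear scan.

```python
import numpy as np

# Illustrative threshold: cosine similarity above this suggests the same face.
# Real systems would calibrate this against their embedding model's error rates.
SIMILARITY_THRESHOLD = 0.92

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_reused_face(new_embedding: np.ndarray,
                     enrolled: dict[str, np.ndarray]) -> list[str]:
    """Return customer IDs whose enrolled face is suspiciously similar
    to the face submitted in a new onboarding attempt."""
    return [
        customer_id
        for customer_id, embedding in enrolled.items()
        if cosine_similarity(new_embedding, embedding) >= SIMILARITY_THRESHOLD
    ]

# Toy usage: two enrolled customers, one new applicant reusing a known face.
rng = np.random.default_rng(seed=7)
alice = rng.normal(size=128)
bob = rng.normal(size=128)
enrolled = {"alice": alice, "bob": bob}

applicant = alice + rng.normal(scale=0.01, size=128)  # near-duplicate of alice
print(flag_reused_face(applicant, enrolled))  # ['alice'] -> escalate for review
```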


Visual 1. A fraudster searching for image injection software in a dark web fraud forum. Such software is used to bypass KYC verification processes that utilize liveness checks. Source: Daniel Shkedi/Refine Intelligence.


Visual 2. A fraudster asking for advice on how to overcome verification difficulties while using deep fakes. Source: Daniel Shkedi/Refine Intelligence.


Automated Attack Tools

AI enables cyber-criminals to develop tools capable of launching highly sophisticated attacks and bypassing verification systems. AI technology can generate malicious code that automates attacks such as credential stuffing, card testing, distributed denial-of-service (DDoS), and brute-force attacks. In addition, these tools can automate various stages of fraud and money laundering schemes, such as the systematic creation of new accounts through automated scripts.
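
On the defensive side, one simple counter to scripted account creation is a velocity check. The sketch below is a minimal, hypothetical Python example that flags bursts of signup attempts sharing a device fingerprint within a sliding time window; the window length, the threshold, and the event format are invented for illustration, and real fraud platforms combine many such signals.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative policy: more than 3 signups from one device fingerprint
# within 10 minutes is treated as likely scripted activity.
WINDOW = timedelta(minutes=10)
MAX_SIGNUPS_PER_WINDOW = 3

class SignupVelocityMonitor:
    """Flags bursts of account-creation attempts per device fingerprint."""

    def __init__(self):
        self._events = defaultdict(deque)  # fingerprint -> recent timestamps

    def record(self, fingerprint: str, ts: datetime) -> bool:
        """Record a signup attempt; return True if the burst looks automated."""
        window = self._events[fingerprint]
        window.append(ts)
        # Drop attempts that have fallen out of the sliding window.
        while window and ts - window[0] > WINDOW:
            window.popleft()
        return len(window) > MAX_SIGNUPS_PER_WINDOW

# Toy usage: four signups from the same device within two minutes.
monitor = SignupVelocityMonitor()
t0 = datetime(2024, 1, 1, 12, 0)
for i in range(4):
    flagged = monitor.record("device-abc123", t0 + timedelta(seconds=30 * i))
print(flagged)  # True -> route to step-up verification or manual review
```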



Visual 3. A fraudster explaining how to prompt ChatGPT to generate malicious code for DDoS attacks in a dark web forum. Translation from Russian: “Let's make ChatGPT write malicious code. In short, you need to ask the robot questions about malware already known on the Internet, and then, based on their passages, you can create your own.” Source: Daniel Shkedi/Refine Intelligence.


Bank Scams

By leveraging AI, fraudsters can generate highly convincing and personalized phishing emails, help-desk scam SMS messages, ‘CEO fraud’ emails, and detailed scripts for phone scams. AI enables fraudsters to analyze social media posts, emails, and public records to craft tailored messages that manipulate victims into disclosing confidential information or taking actions that benefit the fraudsters.
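
Defenders can score incoming messages for the very patterns AI makes easy to produce. The sketch below is a deliberately crude, hypothetical heuristic scorer in Python; the keyword list, the weights, and the `ourbank.example` domain are invented for illustration, and production email defenses combine learned models with sender-authentication signals (SPF/DKIM/DMARC).

```python
import re

# Invented indicator list and weights, for illustration only; production
# filters learn weights from labeled data and add authentication signals.
URGENCY_TERMS = ("urgent", "immediately", "wire transfer", "confidential",
                 "verify your account", "gift cards")

def phishing_score(sender: str, display_name: str, body: str) -> int:
    """Crude heuristic score: higher means more phishing-like."""
    score = 0
    text = body.lower()
    # 1. Urgency and payment-pressure language typical of CEO fraud.
    score += sum(2 for term in URGENCY_TERMS if term in text)
    # 2. Display name claims an executive, but the sender domain is not ours.
    if "ceo" in display_name.lower() and not sender.endswith("@ourbank.example"):
        score += 3
    # 3. Links whose URLs contain credential-related keywords.
    if re.search(r"https?://\S*(login|verify|secure)\S*", text):
        score += 2
    return score

# Toy usage: an AI-generated "urgent wire transfer" email from a fake CEO.
msg = ("I need you to process a wire transfer immediately. This is "
       "confidential. Verify your account here first: "
       "http://secure-login.example.net/verify")
print(phishing_score("j.smith@secure-login.example.net", "CEO John Smith", msg))
# A high score would route the message to quarantine and the fraud team.
```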


Visual 4. A fake ‘CEO Fraud’ email generated by ChatGPT. Fake emails like this are used to trick victims in phishing/social engineering schemes at an alarming rate. Source: Daniel Shkedi/Refine Intelligence. 


Going Forward

In the coming 12 months, the landscape of AI-powered fraud and financial crime will most likely continue to evolve and pose new challenges for individuals, businesses, and governments. Dealing with this alarming threat requires a 360-degree approach that combines technological innovation, regulatory oversight, and collaboration across industries. As technology continues to advance, it is crucial for financial institutions to invest more in robust AI-driven detection systems capable of identifying and mitigating emerging threats in real time. In addition, regulatory bodies must adapt and formulate policies that address the evolving nature of AI-powered financial crime, while promoting cooperation to fight illegal activities.

Beyond technology and regulation, raising awareness among users and fraud/AML teams is extremely important. By formulating a proactive strategy, decision-makers and investigators alike will be able to mitigate some of the many risks associated with AI-powered fraud and financial crime and stay ahead in this cat-and-mouse game.
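
As a closing illustration of what one AI-driven detection building block can look like, here is a minimal, hypothetical sketch using scikit-learn's IsolationForest to flag anomalous transactions. The three features and the contamination rate are invented toy values; real systems learn from far richer behavioral data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented toy features per transaction: [amount_usd, hour_of_day,
# transactions_in_last_24h]. Real systems use far richer behavioral signals.
rng = np.random.default_rng(seed=42)
normal = np.column_stack([
    rng.normal(80, 30, size=500),   # everyday purchase amounts
    rng.normal(14, 4, size=500),    # mostly daytime activity
    rng.poisson(3, size=500),       # a few transactions per day
])
suspicious = np.array([
    [9500.0, 3.0, 40.0],            # large 3 a.m. transfer amid a burst
    [7200.0, 4.0, 35.0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers.
print(model.predict(suspicious))   # expect [-1 -1] -> alert for investigation
print(model.predict(normal[:3]))   # mostly [1 1 1]
```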







