Generative AI deepfakes have the potential to both amaze and terrify us. On one hand, they can create stunningly realistic images and videos. On the other hand, they can be used for malicious purposes such as spreading misinformation or committing fraud. According to a recent research report from Cato Networks’ CTRL Threat Research, deepfakes can even be used to bypass two-factor authentication.
AI Generates Videos of Fake People Looking into a Camera
In the report, a threat actor known as ProKYC was identified as using deepfakes to create fake government IDs and trick facial recognition systems. ProKYC then sells this tool on the dark web to individuals looking to infiltrate cryptocurrency exchanges by creating fake accounts.
With generative AI, the attacker can easily generate a realistic image of a person’s face, which is then placed on a fake driver’s license or passport. This fake identity is used to pass live video verification checks required by some crypto exchanges. The deepfake tool can create an AI-generated video of a person looking around, fooling the facial recognition system.
This type of attack, known as New Account Fraud, has led to billions of dollars in losses. The attacker gains access to the exchange using the generated identity and can use it for money laundering or other fraudulent activities.
How to Prevent New Account Fraud
Cato Networks’ Chief Security Strategist, Etay Maor, suggests several measures organizations can take to prevent the creation of fake accounts using AI:
- Scan for common traits of AI-generated videos, such as unusually high image quality and clarity.
- Look for glitches in AI-generated videos, especially around the eyes and lips.
- Collect threat intelligence data across the organization.
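The first check in the list above can be partly automated. One simple heuristic for "unusually high quality and clarity" is a sharpness score: compute the variance of a Laplacian filter over each video frame and flag frames that score far above what typical webcam footage produces. The sketch below is a minimal, dependency-free illustration of that idea; the threshold value and function names are hypothetical, not part of Cato's report, and a production system would calibrate thresholds against real camera footage.

```python
def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian.

    img is a 2-D list of grayscale pixel intensities (0-255).
    Flat, soft footage scores low; very crisp, noise-free frames
    (one possible tell of AI-generated video) score high.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)


# Hypothetical cutoff; real systems would tune this per camera/codec.
SHARPNESS_THRESHOLD = 500.0


def looks_suspiciously_sharp(frame):
    """Flag a frame whose sharpness far exceeds typical webcam output."""
    return laplacian_variance(frame) > SHARPNESS_THRESHOLD
```

A score like this would only be one weak signal among many; in practice it would be combined with the other checks above, such as artifact detection around the eyes and lips and organization-wide threat intelligence.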
Finding the right balance between security and convenience is crucial. Strict biometric authentication systems may result in false positives, while lax controls can lead to fraud. It’s essential to constantly reassess and update security measures to stay ahead of evolving threats.