IT Brief India - Technology news for CIOs & IT decision-makers

Deepfake attacks cost firms an average of USD $280,000 per incident

Fri, 10th Oct 2025

A new report has revealed that more than half of organisations experienced financial losses due to deepfake-related fraud in the past year, with an average impact of USD $280,000 per incident.

The industry report, titled Beyond Detection: The $280K Reality of Deepfake Attacks, was produced following a survey of 500 IT professionals at organisations with between 1,000 and 10,000 employees. The findings point to a growing prevalence of deepfake-enabled cyberattacks and demonstrate their financial consequences for businesses worldwide.

Financial implications

According to the report, 55% of surveyed organisations reported losing revenue directly due to deepfake or AI-generated voice fraud over the previous 12 months. Of those affected, 61% reported losses exceeding USD $100,000, and nearly a fifth (19%) lost USD $500,000 or more. Six-figure losses now appear routine in incidents involving deepfake technology.

Eyal Benishti, Chief Executive Officer of IRONSCALES, commented on the current landscape faced by IT departments and security professionals.

"As we enter the age of Phishing 3.0, the most worrying trend we see is that IT leaders claim confidence in their defenses, despite the majority of survey respondents reporting financial losses. When the stakes surpass six-figure losses, it's imperative that organizations make the necessary investments to mitigate these threats. The data clearly shows that traditional security defenses are no longer sufficient, and organizations must invest in adaptive security measures that will evolve with the threats they detect."

Incidents on the rise

The research indicates that deepfake-enabled attacks are increasingly common. 85% of respondents said they experienced at least one deepfake-related incident in the last year, a 10% increase on the previous year, with over 40% reporting three or more such incidents. Email-based deepfake attacks and static image manipulation were identified as the most common threat vectors, each affecting 59.3% of respondents.

The report also details the growing use of other attack vectors. Recorded audio and voice content manipulation rose sharply from 25% to 52%, while video-based manipulations increased from 33% to 44.7%. Real-time attack methods also grew: live video manipulation jumped from 30% to 41.2%, and live voice-only calls rose at a similar rate. The data suggests organisations must fortify defences across a wide range of deepfake vectors to counter the threat effectively.

The training disconnect

A notable finding is the limited effectiveness of current cybersecurity training programmes in preventing financial losses from deepfake attacks. Over 88% of respondents indicated that their organisation had delivered deepfake-specific cybersecurity training, up from 68% the previous year. Training frequency has also intensified, with 44% receiving quarterly sessions and 37.8% undergoing monthly training on deepfake threats. Only 11.6% of those surveyed said they had never received such training.

Despite this, 85% of organisations still experienced at least one deepfake incident in the past year, and 55% suffered revenue loss as a direct result. The report highlights a gap between investment in staff training and the real-world outcomes witnessed by organisations, noting that security awareness efforts alone are not currently enough to stem financial harm from advanced attacks.

Investment priorities

There is also evidence that investment in technology and policies to address deepfake threats has not matched the scale of the problem. While the cost of incidents continues to rise, 63% of organisations reported they have yet to allocate any expenditure specifically towards deepfake defences.

The report suggests organisations adopt a three-pronged defensive approach, incorporating training, technology, and formal policies specifically aimed at detecting and countering deepfake-related incidents. It also highlights the importance of using AI-driven technologies to meet AI-enabled threats, and stresses the need to automate incident responses in order to reduce the potential financial impact of these attacks.
