Agentic AI

The New Face of Fraud: A Recap of My Identity Week Talk on Deepfakes

Ken Huang
Sep 17, 2025

I had the incredible opportunity to present on the impact of deepfakes in the agentic era to a packed audience on September 10, 2025, at Identity Week in Washington, DC, and was thrilled by the overwhelmingly positive response! I meant to share this milestone earlier, but as we all know, the conversation around deepfakes and AI authenticity is more relevant than ever. This rapidly evolving topic affects us all, and I'm grateful to have contributed to this important dialogue.

Thanks to Hammad Atta, Dr. Muhammad Zeeshan Baig, Dr. Yasir Mehmood, Nadeem Shahzad, Dr. Muhammad Aziz Ul Haq, Muhammad Awais, Kamal Ahmed, Anthony Green, and Edward Lee for their contributions, peer reviews, and collaboration in developing DIRF and co-authoring the associated research, published on arXiv: https://arxiv.org/pdf/2508.01997

Special thanks to Sahil Dhir and Kevin Yu for capturing this moment!

The threat landscape of digital identity is rapidly evolving, and at the forefront of this change are deepfakes. During my presentation at Identity Week, I delved into the escalating challenges posed by AI-generated identities and outlined a strategic framework for defense. For those who couldn't attend, I wanted to share the key takeaways.

The surge in deepfake incidents is alarming, with a 680% year-over-year increase and 92% of companies experiencing financial losses. In the first quarter of 2025 alone, losses from AI-generated executive impersonations have already surpassed $200 million globally. This underscores a critical "Defense Gap," as human detection accuracy for deepfakes remains at a mere 62%.

My presentation highlighted four primary attack vectors that businesses are currently facing:

  • Synthetic Onboarding: The use of AI-generated identities with fabricated faces to open financial accounts is growing by 300% annually.

  • Voice Cloning Attacks: Scammers are now able to impersonate executives over the phone using as little as three seconds of audio, leading to an average loss of $25 million per incident.

  • Executive Impersonation: Deepfake video and audio are being used in virtual meetings to direct employees to transfer funds to fraudulent accounts, with a frightening 62% success rate.

  • Video Injection Attacks: Live deepfakes are being deployed during video calls to bypass traditional identity verification methods.

A significant takeaway is that 85% of these successful attacks exploit organizational hierarchy by making "urgent" requests outside of normal business hours. The $25 million deepfake CFO fraud case in Hong Kong serves as a stark reminder of the critical vulnerabilities in our current systems, including an over-reliance on visual verification and a lack of multi-factor authentication for large transactions.

A Multi-Layered Defense Strategy

To counter this growing threat, I introduced a multi-layered, AI-powered detection approach that can close the security gap by achieving 94% detection accuracy. This strategy (sketched in code after the list) includes:

  • Behavioral Biometrics: Tracking user patterns and communication styles to detect anomalies in real-time.

  • Voice & Visual Fingerprinting: Utilizing AI algorithms to analyze vocal characteristics and facial micro-expressions.

  • Blockchain Verification: Implementing multi-factor authentication with immutable digital signatures for high-value transactions.
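To make the layering concrete, here is a minimal Python sketch of how these three signal types might be blended into a single risk score, and how high-value transfers could be gated on a verified digital signature. The class names, weights, thresholds, and dollar limit are all illustrative assumptions for this post, not the configuration of any deployed system.

```python
from dataclasses import dataclass

# Hypothetical per-channel scores in [0, 1]; higher means more likely synthetic.
# In practice these would come from dedicated behavioral, voice, and visual models.
@dataclass
class DetectionSignals:
    behavioral_anomaly: float   # deviation from the user's communication baseline
    voice_fingerprint: float    # mismatch against enrolled vocal characteristics
    visual_fingerprint: float   # facial micro-expression / liveness inconsistency

# Illustrative weights and thresholds; a real deployment would calibrate these
# against labeled incident data.
WEIGHTS = {"behavioral": 0.30, "voice": 0.35, "visual": 0.35}
BLOCK_THRESHOLD = 0.7
HIGH_VALUE_LIMIT = 100_000  # transfers above this always require step-up approval

def deepfake_risk(signals: DetectionSignals) -> float:
    """Blend the per-channel scores into a single risk score."""
    return (WEIGHTS["behavioral"] * signals.behavioral_anomaly
            + WEIGHTS["voice"] * signals.voice_fingerprint
            + WEIGHTS["visual"] * signals.visual_fingerprint)

def authorize_transfer(amount: float, signals: DetectionSignals,
                       signature_verified: bool) -> str:
    """Decide whether a requested transfer proceeds, needs step-up, or is blocked."""
    risk = deepfake_risk(signals)
    if risk >= BLOCK_THRESHOLD:
        return "BLOCK: likely synthetic identity, route to fraud review"
    if amount >= HIGH_VALUE_LIMIT and not signature_verified:
        return "STEP-UP: require out-of-band, signed approval before release"
    return "ALLOW"

# Example: a plausible-looking video call with a cloned voice and no signed approval.
print(authorize_transfer(
    amount=250_000,
    signals=DetectionSignals(behavioral_anomaly=0.4,
                             voice_fingerprint=0.8,
                             visual_fingerprint=0.6),
    signature_verified=False,
))
```

The point of the sketch is the layering: even when no single channel is conclusive, the "urgent, after-hours, high-value" pattern described above is forced through an out-of-band, cryptographically verified approval rather than relying on what the caller looks or sounds like.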

The Digital Identity Rights Framework (DIRF)

A core component of my proposed solution is the Digital Identity Rights Framework (DIRF), a comprehensive security and governance model. DIRF is built on nine domains and 63 controls designed to protect individual digital identities from unauthorized use, cloning, and monetization in AI-driven systems.

Unlike existing frameworks such as GDPR and the NIST AI RMF, which offer only partial protection, DIRF provides end-to-end traceability of digital likeness, enforces royalty and monetization rights, and detects and mitigates unauthorized clones.
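For teams that want to start operationalizing a framework like DIRF, a practical first step is simply to inventory controls by domain and track implementation coverage. The short Python sketch below shows one way to do that; the domain names and control IDs are placeholders for illustration only, not the actual nine domains or 63 controls defined in the published framework.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str
    implemented: bool = False

@dataclass
class Domain:
    name: str
    controls: list[Control] = field(default_factory=list)

    def coverage(self) -> float:
        """Fraction of this domain's controls currently implemented."""
        if not self.controls:
            return 0.0
        return sum(c.implemented for c in self.controls) / len(self.controls)

# Placeholder domains and controls, for illustration only.
dirf_inventory = [
    Domain("Likeness Traceability", [
        Control("LT-01", "Log provenance for every generated likeness", implemented=True),
        Control("LT-02", "Watermark AI-generated audio and video"),
    ]),
    Domain("Clone Detection & Response", [
        Control("CD-01", "Continuously scan for unauthorized voice/face clones"),
    ]),
]

for domain in dirf_inventory:
    print(f"{domain.name}: {domain.coverage():.0%} of controls implemented")
```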

A Call to Action for Businesses

The window for proactive defense is closing. The sophistication of deepfake technology is advancing faster than our ability to detect it. I outlined a 90-day implementation roadmap for businesses to protect themselves and deepfake-proof their key workflows, such as payment, purchase-agreement, and supply-chain processes, which includes:

  • Days 1-30: Deploying AI detection systems and launching employee awareness training.

  • Days 31-60: Integrating advanced training modules and optimizing processes.

  • Days 61-90: Achieving full system integration and activating continuous monitoring.

Paid subscribers can download the full slide deck from the following link:
