Your next phone call from your boss might not be from your boss.

In 2025, deepfake-related fraud losses reached roughly $1.1 billion globally according to Surfshark Research (Jan 2026) — up from an estimated $360 million in 2024. Deloitte's Center for Financial Services projects that AI-enabled fraud could approach $40 billion by 2027.

A finance employee at Arup was tricked into transferring $25 million after a video call with what appeared to be the company's CFO and several colleagues. Everyone else on that call was AI-generated. Every single one.

"Deepfake-as-a-Service" is now a product you can buy.

  • $1.1B: deepfake fraud losses, 2025
  • 3x: year-over-year increase (Surfshark)
  • $40B: projected AI-enabled fraud by 2027 (Deloitte)

This Isn't an Abstract Threat

For defense professionals, consider this scenario:

A program manager gets a call from their DCMA counterpart asking for a contract modification. The voice is perfect. The context is accurate. The request seems urgent but reasonable.

Except it's not them.

This isn't hypothetical. The FBI and DOJ issued multiple warnings in 2025 about North Korean operatives using deepfake technology and AI-manipulated identities to infiltrate US companies, potentially including defense contractors, by posing as remote IT workers. Experian flagged "deepfake candidates" infiltrating remote workforces as the #2 fraud threat for 2026.

According to Pindrop's dataset, AI-driven fraud attacks against major US financial institutions rose 1,210% in 2025 alone — though this figure reflects Pindrop's specific monitoring scope, not the full industry. The technology is cheap, improving exponentially, and requires only seconds of sample audio to clone a voice.

The Verification Problem

The old verification method — "I'll call you back at your office number" — is dying. Caller ID is spoofable. Office numbers can be forwarded. Even video calls aren't proof anymore, as the Arup incident demonstrated.

What replaces it?

  • Challenge-response protocols. Pre-arranged verbal challenges with known responses, rotated on a schedule so a leaked phrase goes stale.
  • Out-of-band verification. Confirm a phone request via encrypted email. Confirm an email request via phone. Never verify through the same channel.
  • Rotating authentication codes. The same time-based codes we already trust for two-factor authentication and crypto key management, applied to human communication (a minimal sketch follows this list).
  • Behavioral baselines. Does this request match the established pattern? Is the urgency unusual? Is the authority appropriate? (A rule-based sketch also follows below.)
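
One way to make the rotating-code idea concrete is the TOTP construction (RFC 6238) behind most two-factor apps. The sketch below, in Python, is illustrative, not a standard: it assumes a shared secret exchanged through a trusted channel, and the one-hour interval and six-digit spoken format are arbitrary choices.

```python
import hashlib
import hmac
import struct
import time

def rotating_code(shared_secret: bytes, interval: int = 3600) -> str:
    """Derive a short verbal code from a shared secret, TOTP-style (RFC 6238).

    Both parties hold the same secret (exchanged in person or via a
    trusted channel) and compute the current code independently --
    nothing secret is ever transmitted over the channel being verified.
    """
    counter = int(time.time()) // interval          # rotates every `interval` seconds
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation, per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"                # six digits, easy to read aloud

if __name__ == "__main__":
    secret = b"exchanged-in-person-not-over-the-wire"  # placeholder secret
    print(rotating_code(secret))
```

The caller reads the code aloud; the recipient computes it locally and compares. A cloned voice doesn't help the attacker, because the code depends on a secret the voice never carried.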
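Behavioral baselines can start as nothing more than explicit rules. The sketch below is hypothetical: the requester profile, fields, and thresholds are invented for illustration, and a real system would derive them from logged request history rather than a hard-coded table.

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    action: str
    amount: float
    channel: str
    flagged_urgent: bool

# Illustrative baseline: what this requester normally asks for, and how.
BASELINE = {
    "dcma_counterpart": {
        "typical_actions": {"status_update", "schedule_review"},
        "max_amount": 0.0,            # never initiates fund movement
        "usual_channels": {"email", "portal"},
    },
}

def verification_required(req: Request) -> bool:
    """Return True if the request deviates from the requester's baseline
    and should be confirmed out-of-band before anyone acts on it."""
    base = BASELINE.get(req.requester)
    if base is None:
        return True                                   # unknown requester: always verify
    return (
        req.action not in base["typical_actions"]     # unusual ask
        or req.amount > base["max_amount"]            # outside normal authority
        or req.channel not in base["usual_channels"]  # unexpected channel
        or req.flagged_urgent                         # urgency is the classic pressure tactic
    )

# The deepfake scenario above trips every rule:
print(verification_required(
    Request("dcma_counterpart", "contract_mod", 25_000.0, "phone", True)))  # True
```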

This sounds paranoid. It won't sound paranoid in 2028.

The Defense Implications

The organizations that build human authentication protocols NOW — before the first major defense deepfake incident — will be the ones that don't end up in a GAO report.

For those of us in defense program management: every process you have that starts with "when you receive a call from..." needs a second sentence. And that sentence needs to be: "...and here's how you verify it's actually them."

The next generation of social engineering attacks won't come through email. They'll come through a phone call that sounds exactly like your commanding officer. And they'll ask for something just reasonable enough that you won't think to question it.

We train people to spot phishing emails. We need to start training them to spot phishing voices. And the window to get ahead of this is closing faster than most defense leaders realize. The Pentagon just gave 3 million people access to AI tools — but the verification protocols haven't caught up to the threats those same tools enable.


By 2028, every major defense contractor will have a deepfake voice incident. The ones that survive it cleanly will be the ones that built verification protocols before they needed them. The rest will be case studies.

Sources: Surfshark Research (Jan 2026), Deloitte Center for Financial Services (May 2024), Pindrop AI Fraud Report (Feb 2026), CNN reporting on Arup incident (Feb 2024), FBI IC3 Public Service Announcements (2025)