Critical Threat Intelligence & Advisory Summaries

[Image: AI-generated deepfake candidate in a remote job interview]

One in Four Job Applicants Could Be Fake by 2028, Experts Warn

 

LONDON — The rise of AI-generated job candidates and deepfake employees is creating a global security threat, cybersecurity experts warn. A 2024 incident in which a North Korean operative infiltrated U.S. cybersecurity firm KnowBe4 illustrates the risk: the individual passed interviews, background checks, and reference verification before being detected 25 minutes after receiving company equipment.

 

The warning comes as organizations report a rise in deepfake-related fraud incidents in late 2025 and early 2026.

 

Key Takeaways

- AI-generated job candidates and synthetic employees are increasing globally.

- One in four applicants could be fake by 2028.

- Deepfake fraud is expanding from hiring into financial and insider threats.

- Security experts say identity verification must be continuous, not one-time.

 

For Security Leaders

For security leaders, the shift is significant: hiring is becoming a privileged access event. A synthetic employee with valid credentials, approved onboarding, and internal system access represents the same risk profile as a compromised administrator account. Unlike traditional external attacks, these threats enter through trusted processes and may operate undetected for months.

 

A Workforce Threat Moving Into the Mainstream

Law enforcement and industry reports indicate the threat extends far beyond a single incident. Between 2020 and 2022, U.S. authorities identified hundreds of companies that unknowingly hired North Korean IT workers. Investigations suggest many operatives funneled earnings to state programs. Threat intelligence firms, including Mandiant, report similar cases in Fortune 500 firms worldwide.

The schemes often rely on stolen or synthetic identities, fabricated employment histories, and coordinated infrastructure designed to make remote workers appear domestically based.

 

Generative AI Lowers the Barrier

Advances in generative AI are accelerating the problem. Research from Palo Alto Networks shows that a convincing fake candidate persona, complete with photos, documentation, and an online presence, can be assembled in about an hour. Voice-cloning technology can produce close matches from just a few seconds of audio, according to Pindrop.

Human detection remains limited: studies show people correctly identify high-quality deepfake video only 25% of the time. Gartner projects that by 2028, one in four global job applicants could be entirely synthetic.

 

From Paychecks to Access and Extortion

Security experts say the risk now extends beyond salary fraud. Google Threat Intelligence Group observed cases in which fraudulent employees attempted data theft or extortion after gaining internal access. Other incidents involve ransomware or unauthorized system use once initial trust is established.

Deepfake impersonation also affects financial controls: in 2024, an Arup employee transferred $25 million after participating in a video call featuring AI-generated likenesses of senior executives. Industry data indicates businesses now lose hundreds of thousands of dollars per deepfake incident on average.

 

Why Traditional Verification Is Failing

KnowBe4’s experience highlights a structural issue: all standard hiring checks passed, but detection occurred only through technical monitoring, restricted access, and behavioral anomaly analysis.

 

“When visual and audio evidence can be generated synthetically, identity verification cannot rely on what humans see or hear,” cybersecurity analysts note.

 

Legacy verification processes, such as video interviews, reference checks, and background screening, are no longer sufficient.
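As the KnowBe4 case suggests, the checks that do work are behavioral: monitoring what a new hire's account and device actually do once access is granted. The sketch below is a hypothetical illustration of a probation-window rule that flags high-risk activity on a newly issued device for analyst review; the event names, window length, and data shapes are assumptions made for illustration, not details of any vendor's tooling.

```python
from datetime import datetime, timedelta

# Assumed probation window during which a new hire's activity receives extra scrutiny.
PROBATION_WINDOW = timedelta(days=30)

# Event types treated as high-risk on a newly issued device (illustrative list only).
HIGH_RISK_EVENTS = {"remote_access_tool_install", "mass_file_download", "credential_dump_attempt"}

def flag_new_hire_activity(hire_date: datetime, events: list) -> list:
    """Return events on a probationary employee's device that warrant analyst review."""
    flagged = []
    for event in events:
        within_probation = event["timestamp"] - hire_date <= PROBATION_WINDOW
        if within_probation and event["type"] in HIGH_RISK_EVENTS:
            flagged.append(event)
    return flagged

# Example: a remote-access tool installed hours after equipment delivery is flagged;
# a routine login the next day is not.
hired = datetime(2026, 1, 5)
alerts = flag_new_hire_activity(hired, [
    {"type": "remote_access_tool_install", "timestamp": datetime(2026, 1, 5, 14, 30)},
    {"type": "routine_login", "timestamp": datetime(2026, 1, 6, 9, 0)},
])
print(alerts)
```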

 

The Emerging Risk of Autonomous “Agentic” Threats

Experts warn these incidents may be early signs of autonomous AI agents: persistent digital workers capable of interviewing, onboarding, and performing tasks while maintaining synthetic identities.

 

“This is not just a hiring problem, it’s a new security perimeter,” cybersecurity analysts warn. “A synthetic employee entering a corporate network has the same potential impact as a compromised administrator. Unlike phishing or ransomware, these threats come through trusted onboarding processes and can persist undetected for months. With generative AI now capable of maintaining multiple fake identities simultaneously, organizations that do not treat every new hire as a potential attack vector are effectively leaving their doors open.”

 

Controls That Reduce Risk

Security researchers have identified measures that consistently limit impact (illustrated in the sketch after this list):

 

- Out-of-band verification via trusted channels

- Independent dual approval for sensitive actions

- Transaction limits and mandatory delays

- Strict no-exception verification policies

- Action-level multi-factor authentication

- Restricted access and monitoring for new employees
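
To make the layering concrete, the sketch below shows how several of these controls, independent dual approval, out-of-band confirmation, and a transaction limit with a mandatory delay, might combine in a single payment-release check. The names, thresholds, and workflow are illustrative assumptions, not a prescribed configuration or any vendor's implementation.

```python
from dataclasses import dataclass

# Hypothetical policy thresholds; illustrative values, not recommendations.
TRANSACTION_LIMIT = 50_000       # single-transfer ceiling before a hold applies (assumed figure)
MANDATORY_DELAY_HOURS = 24       # cooling-off period for high-value transfers (assumed figure)

@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvers: list                      # identities that approved the transfer
    out_of_band_confirmed: bool = False  # confirmed via a separate, trusted channel (e.g. a call-back)

def release_transfer(req: TransferRequest) -> str:
    """Apply layered controls before releasing funds (sketch, not production logic)."""
    # Independent dual approval: at least two distinct approvers, neither of them the requester.
    independent = {a for a in req.approvers if a != req.requester}
    if len(independent) < 2:
        return "BLOCKED: requires two independent approvers"

    # Strict, no-exception verification through an out-of-band trusted channel.
    if not req.out_of_band_confirmed:
        return "BLOCKED: out-of-band verification not completed"

    # Transaction limit with a mandatory delay for anything above it.
    if req.amount > TRANSACTION_LIMIT:
        return f"HELD: exceeds limit, release after {MANDATORY_DELAY_HOURS}h review"

    return "RELEASED"

# Example: a large transfer approved only by the requester and one executive is blocked outright.
print(release_transfer(TransferRequest(
    amount=25_000_000,
    requester="finance.analyst",
    approvers=["finance.analyst", "cfo"],
)))
```

The same action-level pattern extends beyond payments: payroll changes and access grants can be gated by equivalent checks rather than by login-time authentication alone.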

 

Despite these safeguards, surveys show many HR and hiring teams have no training on AI-enabled fraud, relying mainly on manual reviews and traditional background checks.

 

 


About This Article

Last Updated: same as published
Reading Time: Approximately 15 minutes

 

Author Information

Timur Mehmet | Founder & Lead Editor

Timur is a veteran Information Security professional with a career spanning over three decades. Since the 1990s, he has led security initiatives across high-stakes sectors, including Finance, Telecommunications, Media, and Energy.

 

For more information, including independent citations and credentials, visit our About page.

 


 

Editorial Standards

This article adheres to Hackerstorm.com's commitment to accuracy, independence, and transparency:

  • Fact-Checking: All statistics and claims are verified against primary sources and authoritative reports
  • Source Transparency: Original research sources and citations are provided in the References section below
  • No Conflicts of Interest: This analysis is independent and not sponsored by any vendor or organization
  • Corrections Policy: We correct errors promptly and transparently.

Editorial Policy: Ethics, Non-Bias, Fact Checking and Corrections


Learn More: About Hackerstorm.com | FAQs

 

 


 

References

This article synthesizes publicly available disclosures, court records, and research from KnowBe4, DOJ filings, Mandiant, Gartner, Palo Alto Networks, Pindrop, Google Threat Intelligence Group, CrowdStrike, and Okta. Data reflects the most recent information available as of February 2026. This analysis is informational and does not constitute legal or regulatory advice.

 

 
