
July 24, 2025

How AI-generated video can transform security awareness training

With the rapid rise of deepfake technology and increasing cyberattacks, traditional security training methods are falling short, and AI-driven security awareness training is required. One part of this is using AI-powered video tools like Google Vids and Veo3 to create dynamic, real-time training that enhances engagement and ensures security awareness keeps pace with emerging threats.


Executive summary: The case for agile security awareness

Security awareness training has remained relatively unchanged over the past decade; the rise of AI now necessitates an update. Data indicates that human-related weaknesses contribute to 82% of all data breaches, and the volume of cyberattacks experienced by organizations surged by 75% year-over-year in the fourth quarter of 2024.

This escalating trend highlights a growing vulnerability within an organization’s human defenses. AI-driven deepfakes have emerged as the new frontier in social engineering, with a staggering 1,740% surge in fraud cases in North America between 2022 and 2023. This development necessitates a fundamental re-evaluation of how organizations educate their workforce.

The strategic adoption of AI-powered video tools, exemplified by platforms such as Google Vids integrated with Veo3 technology, can facilitate the rapid creation of dynamic, hyper-relevant, and highly engaging training content, enabling organizations to respond to emergent threats in real time. The persistent rate at which human vulnerabilities are exploited, even though employees often believe training enhances job performance, indicates a fundamental flaw in how training is delivered rather than a lack of willingness to learn.

This situation reveals that while training possesses the inherent capacity to reduce risks significantly, traditional forms are failing to actualize this potential due to slowness and irrelevance. Employing AI in security awareness training cultivates an agile security culture, moving beyond outdated, compliance-driven exercises to foster continuous vigilance and awareness.

Why traditional training fails

Security frameworks and standards recognize security awareness training as a cornerstone of any information security program. However, its conventional implementation frequently falls short, leaving enterprises susceptible to an increasingly sophisticated array of cyberattacks. The fundamental challenge lies in the inherent inability of traditional training methodologies to keep pace with the rapid and dynamic evolution of modern cyber threats.

Despite considerable investments in technological cybersecurity defenses, the human element persistently remains the most significant and exploitable vulnerability. A striking 82% of data breaches have been directly linked to human-related security weaknesses, with other reports attributing between 80% and 88% of all data breaches to human error. This pervasive human susceptibility is further underscored by the continued effectiveness of common attack vectors such as phishing and spoofing, which affected approximately 298,000 individuals in 2023 alone.

Moreover, prevalent incidents, such as ransomware (accounting for 44% of incidents), business email compromise (27%), and network intrusions (24%), primarily rely on employee mistakes to succeed. That 90% of security breaches originate from known threats, while the vast majority remain human-related, indicates that the issue is not a general lack of awareness regarding threat types. Instead, it points to a failure to apply existing knowledge or adapt to novel variations of these known threats (if you can call social engineering “novel”…). The human vulnerability is less about ignorance and more about behavioral gaps, information overload, and the inherent difficulty of translating static, theoretical knowledge into dynamic, real-time defensive actions. Consequently, traditional training, which often prioritizes foundational knowledge, falls short in reinforcing secure behaviors and enabling agile responses.

Limitations and ineffectiveness of traditional training approaches

Employees, despite passing mandatory tests, often remain susceptible to sophisticated phishing scams when under pressure. A significant weakness lies in the prevalent “one-size-fits-all” training approach, which proves ineffective in fostering genuine behavioral change or embedding a pervasive security culture throughout an organization. For example, a study revealed no significant correlation between the recency of an employee’s annual cybersecurity training and their actual ability to avoid phishing traps; individuals who had just completed training performed no better in simulated attacks than those who had not received training for over a year.

Many employees dedicate less than a minute to embedded phishing training pages, clearly indicating a lack of interest and perceived relevance. While interactive training methods have shown marginal improvements, their overall protective effect remains modest against the sophistication of modern phishing attacks. Another critical challenge is the rapid decay of learned information; individuals tend to forget approximately half of all new information within a single hour of learning it. This rapid forgetting curve underscores the need to deliver training in short, frequently repeated, multi-format sessions. Research suggests that training conducted every four months represents a “sweet spot” for achieving consistent results, with employee scores in phishing detection tests notably declining after six months. Furthermore, traditional training can overlook specific high-value targets and critical operational roles, such as executives, HR staff, and OT personnel, who are often overwhelmed by busy schedules.

AI-powered video

A solution to the inherent limitations of static training lies in harnessing AI to produce dynamic, on-demand content that keeps pace with the relentless evolution of cyber threats. Tools such as Google Vids, powered by advanced AI models like Veo3, let users generate custom videos from simple text prompts, eliminating the traditional requirement for specialized software or extensive video-editing skills.

Veo3 demonstrates a nuanced understanding of prompts, even recognizing cinematic language such as “timelapse” or “aerial shot.” When Veo3 dropped, my X feed became much more entertaining. By eliminating the traditional bottlenecks associated with conventional video production, security teams can now rapidly create and disseminate professional, engaging training content. For instance, a custom explainer video detailing a newly discovered phishing tactic can be produced and distributed to the workforce within hours of its discovery. This shift decentralizes control and enhances agility, moving it directly from external vendors or specialized internal departments to the security function, thereby enabling a truly real-time training response to emerging threats.
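As a sketch of what this prompt-driven workflow might look like in practice, the snippet below assembles a video-generation prompt from a threat-intel record. The record fields and prompt template are my own illustrative assumptions, not a Google Vids or Veo3 API; the resulting text would be pasted into (or sent to) whatever AI video tool your team uses.

```python
# Hypothetical prompt builder for just-in-time explainer videos.
# Field names and the template are illustrative assumptions, not a real API.

def build_explainer_prompt(threat: dict) -> str:
    """Turn a fresh threat-intel record into a video-generation prompt."""
    return (
        f"Create a 60-second security awareness explainer. "
        f"Threat: {threat['name']}. "
        f"How it works: {threat['tactic']}. "
        f"Show an employee receiving the lure, then the correct response: "
        f"{threat['response']}. "
        f"Style: office setting, natural lighting, calm practical tone."
    )

phish = {
    "name": "QR-code payroll phish",
    "tactic": "an email with a QR code leading to a fake payroll login page",
    "response": "report via the phishing button and verify through the HR portal",
}
prompt = build_explainer_prompt(phish)
```

Because the prompt is plain text, the same builder works whichever text-prompt video tool the security team ultimately adopts.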

Key benefits: Enhanced engagement, retention, and accessibility

AI-powered video creation demonstrably enhances both engagement and information retention, with studies indicating an increase of up to 60% in retention rates compared to text-based formats. The integration of virtual characters or conversational AI avatars, such as D-ID Agents, introduces a human touch to training videos. These avatars can function as presenters or simulated customers, fostering a more immersive and engaging learning experience. This capability enables employees to practice critical skills in a safe, simulated environment, such as de-escalation techniques with a disgruntled AI customer. AI tools can also automatically generate quizzes based on video content, transforming passive learning into interactive experiences that reinforce understanding and enable learners to test their knowledge effectively.

Personalization stands as a significant advantage: AI can analyze individual learner behavior, job roles, and past performance data to tailor content difficulty, recommend supplementary resources, or reorder modules, thereby creating highly personalized learning paths that include multi-language content translation.

Real-world agility: Responding to threats in real-time

The most profound impact of AI-generated video technology lies in its capacity to enable real-time responses to emerging cyber threats. Organizations can now generate dynamic, customized, and high-impact training content that precisely reflects what attackers are doing right now. This level of agility is indispensable for addressing zero-day vulnerabilities or newly identified phishing tactics. Rather than waiting weeks or months for traditional training materials to be updated and distributed, a custom explainer video can be produced and disseminated within hours, visually demonstrating to employees exactly what the threat looks like and outlining the appropriate response protocols.

This capability for rapid creation and distribution fundamentally shifts security awareness from a reactive, scheduled, and generic model to a proactive, “just-in-time” learning paradigm. This directly addresses the critical problem that delayed training is ineffective training, allowing the training pipeline to match the speed of threat evolution and significantly narrow the window of organizational vulnerability.

AI tools further enhance this capability by allowing for precise tailoring of content to an organization’s specific context. AI’s capacity to tailor content with company branding, industry language, internal tools, and real-world workflows elevates training beyond abstract security advice, such as “don’t click suspicious links”. This hyper-contextualization is crucial for fostering genuine behavioral change, as it makes security directly relevant and applicable to an employee’s daily tasks and the organization’s unique threat profile, thereby significantly boosting engagement and retention. Moreover, AI can analyze historical security incidents to generate highly realistic attack simulations, preparing employees for advanced techniques and zero-day attacks. It can also seamlessly integrate with real-time threat intelligence reports, highlighting emerging vulnerabilities and automatically generating targeted training content to address current security challenges.
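To make the threat-intelligence integration concrete, here is a minimal sketch assuming a simple in-house feed format (the records and thresholds are invented for illustration): recent, high-severity entries are selected and ordered so they can become the next batch of training videos.

```python
from datetime import date

# Illustrative sketch (not a specific vendor API): turn recent, high-severity
# threat-intel entries into queued training topics, so content generation
# tracks what attackers are doing right now.

def select_training_topics(intel, today, max_age_days=14, min_severity=7):
    recent = [
        item for item in intel
        if (today - item["seen"]).days <= max_age_days
        and item["severity"] >= min_severity
    ]
    # Highest-severity threats first: these become video prompts next.
    return sorted(recent, key=lambda i: i["severity"], reverse=True)

intel = [
    {"topic": "Deepfake CFO video call", "severity": 9, "seen": date(2025, 7, 20)},
    {"topic": "Generic spam wave",       "severity": 3, "seen": date(2025, 7, 22)},
    {"topic": "MFA fatigue prompts",     "severity": 8, "seen": date(2025, 6, 1)},
]
queue = select_training_topics(intel, today=date(2025, 7, 24))
# Only the deepfake entry qualifies: the spam wave is below the severity
# threshold, and the MFA item is older than 14 days.
```

The age and severity cutoffs are policy knobs; the point is that the training queue is derived automatically from the feed rather than from an annual curriculum.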

Deepfakes: The dark side of AI content creation

While AI offers immense potential for bolstering organizational defenses, it simultaneously powers a new generation of highly sophisticated deepfake attacks. These AI-generated forgeries demand a critical reorientation of security awareness priorities.

Deepfakes are highly realistic, fabricated audio, video, or images created using advanced generative AI technologies, primarily Generative Adversarial Networks (GANs) and autoencoders. These malicious creations exploit fundamental human trust by digitally impersonating individuals with alarming fidelity.

The proliferation of deepfake fraud is a cause for significant alarm. North America alone experienced a staggering 1,740% surge in deepfake fraud cases between 2022 and 2023, resulting in financial losses exceeding $200 million in the first quarter of 2025 alone. Globally, deepfake fraud attempts increased by 2,137% over a three-year period, accounting for 6.5% of all fraud attempts in 2023.

The accessibility of deepfake technology has effectively democratized fraud, significantly lowering the barrier to entry for malicious actors. Voice cloning, for instance, can now be achieved with as little as 20-30 seconds of audio, and highly convincing video deepfakes can be generated in just 45 minutes using readily available software. The scale of this threat is expanding exponentially, with deepfake videos increasing at a rate of 900% annually, while corresponding detection capabilities consistently lag. The rapid increase in deepfake generation, coupled with the lagging automated and human detection, means that technological defenses alone are insufficient, creating a paramount reliance on human vigilance as the ultimate line of defense.

Real-world incidents provide stark illustrations of the devastating potential of deepfake technology. In January 2024, the engineering firm Arup suffered a $25.5 million loss due to a sophisticated AI-generated deepfake attack. A finance worker in Hong Kong participated in what they believed was a legitimate video call with their UK-based Chief Financial Officer and other familiar colleagues. Weeks later, the investigation uncovered that every participant on the call, except the victim, was an AI-generated deepfake. Seemingly familiar faces and voices deceived the finance worker over video, a medium typically considered reliable. Organizations can no longer rely on default trust in digital interactions.

Equipping employees: How AI-powered training directly addresses deepfake threats

Given the inherent challenges and lagging capabilities in automated deepfake detection, educating employees (and employees educating their families) emerges as the most crucial first line of defense. Training programs must explicitly focus on making employees acutely aware of deepfakes and instructing them on how to identify specific indicators of manipulation. Employees require training to look for telltale signs, such as audio and video being out of synchronization, unusual eye movements, unnatural body movements, or inconsistent lighting and shadows within digital communications.

Critically, training should instill a “respond rather than react” mindset: teach employees to pause, critically assess the legitimacy of any request, and verify it through alternative, pre-established communication channels before taking any action. This includes actively encouraging multi-channel authentication for sensitive requests and implementing established “safe words” or callback procedures using pre-verified phone numbers.
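This verification discipline can even be encoded in tooling. Below is a minimal sketch, with hypothetical channel names, of the rule that a sensitive request proceeds only after confirmation on a second, pre-established channel.

```python
# Sketch of multi-channel verification; channel names are illustrative
# assumptions, not a standard or a specific product's policy language.

APPROVED_CHANNELS = {
    "callback_preverified_number",  # call back on a number agreed in advance
    "safe_word_exchange",           # shared phrase established out of band
    "in_person",
}

def may_proceed(request: dict) -> bool:
    """A sensitive request is actionable only after out-of-band verification."""
    if not request["sensitive"]:
        return True
    return request.get("verified_via") in APPROVED_CHANNELS

wire_transfer = {
    "sensitive": True,
    "origin": "video_call",  # the origin channel alone proves nothing
    "verified_via": "callback_preverified_number",
}
```

Note that the channel the request arrived on (here, a video call) carries no weight at all; in the Arup case described above, the video call itself was the forgery.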

Again, AI-powered training is uniquely positioned to deliver this specialized education. It can generate highly realistic deepfake scenarios and simulations, providing employees with a safe, controlled environment to practice identifying subtle anomalies. This approach transcends theoretical knowledge, fostering experiential learning that builds critical human detection skills, which are currently lacking. These simulations can be precisely tailored to mimic real-world threats and can even adapt dynamically based on an individual employee’s performance, offering personalized reinforcement and targeted remedial training.
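One way the adaptive piece might work, purely as an illustrative sketch (the thresholds and tier names are my assumptions), is to pick each employee's next simulation difficulty from their recent detection rate.

```python
# Illustrative adaptive-difficulty rule for deepfake simulations.
# Thresholds and tier names are assumptions for this sketch.

def next_difficulty(detected: int, attempts: int) -> str:
    """Choose the next simulation tier from recent detection performance."""
    if attempts == 0:
        return "baseline"      # no history yet: start with a calibration round
    rate = detected / attempts
    if rate >= 0.8:
        return "advanced"      # only subtle artifacts (lighting, lip sync)
    if rate >= 0.5:
        return "intermediate"
    return "remedial"          # obvious cues plus a guided walkthrough
```

An employee who caught 9 of their last 10 simulated deepfakes would be routed to the advanced tier, while one who caught 2 of 10 would receive remedial reinforcement before harder material.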

Beyond direct training, AI-powered authentication technology can analyze incoming emails, photos, and videos for anomalies to assist in identifying deepfakes. Robust deepfake detectors should be seamlessly integrated into an organization’s broader security infrastructure.

Strategic recommendations for security teams

The pervasive human element in data breaches, coupled with the alarming rise of AI-powered deepfakes, necessitates a fundamental paradigm shift in how organizations approach employee education. Leveraging AI-powered video tools enables the rapid, on-demand creation of highly relevant, engaging, and personalized training content, directly addressing the critical limitations of outdated methods. The ability to deliver “just-in-time” training, tailored to specific emerging threats and individual employee needs, can reduce human error in the face of sophisticated social engineering attacks.

For now, I leave you with the following recommendations:

  • Invest in AI-powered video platforms: Prioritize the adoption of AI-driven video creation tools, such as Google Vids/Veo3 and Synthesia. This investment will enable rapid content generation, customization, and distribution, ensuring training can keep pace with the speed of threat evolution.
  • Shift to a “just-in-time” training model: Supplement foundational annual training with continuous, on-demand AI-generated content to allow for immediate response to zero-day threats and new attack vectors, providing employees with hyper-relevant information precisely when they need it.
  • Prioritize deepfake awareness training: Dedicate significant resources to educating employees on the mechanics and indicators of deepfake attacks. Utilize AI-powered simulations to provide realistic, experiential learning opportunities, training employees to identify subtle anomalies in audio and video.
  • Instill a “never trust, always verify” culture: Implement and reinforce robust multi-channel verification protocols for all sensitive communications and transactions. AI-powered training can embed these practices through repeated, realistic scenarios, making verification an ingrained habit rather than an optional step.
  • Tailor training to specific roles and contexts: Leverage AI’s personalization capabilities to create training content that is highly relevant to individual job roles, industry specifics, and internal workflows. This contextualization enhances engagement and drives stronger behavioral change, moving beyond generic awareness to specific actionable defense.
  • Integrate training with threat intelligence: Utilize AI to analyze emerging threat intelligence and automatically generate targeted training modules. Ensure that security awareness efforts are aligned with the most current and pressing risks facing the organization.
  • Measure behavioral change, not just compliance: Implement metrics that assess actual behavioral changes and reductions in human-related incidents. This represents a crucial shift, highlighting that the ultimate goal of agile security awareness is not merely to deliver training but to reduce organizational risk. By leveraging AI to analyze employee responses to simulated attacks and linking training effectiveness to actual reductions in human-related incidents, organizations can demonstrate a tangible return on investment and continuously optimize their human defense layer — a measurable investment in risk reduction.
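As a sketch of that last recommendation, the snippet below computes per-campaign phishing-simulation failure rates and checks whether they are trending down; the campaign numbers are made up for illustration.

```python
# Illustrative behavioral metric: track the simulation failure rate per
# campaign rather than training-completion counts. Data is invented.

def failure_rate(campaign: dict) -> float:
    return campaign["clicked"] / campaign["targeted"]

campaigns = [
    {"month": "2025-01", "targeted": 400, "clicked": 72},
    {"month": "2025-04", "targeted": 400, "clicked": 48},
    {"month": "2025-07", "targeted": 400, "clicked": 30},
]
rates = [round(failure_rate(c), 3) for c in campaigns]
improving = all(a > b for a, b in zip(rates, rates[1:]))
```

A strictly falling failure rate across campaigns is evidence of behavioral change, which is the return on investment the recommendation asks teams to demonstrate.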

Thanks for taking the time to read. I hope you found this informative and picked up some new insights!
To learn more about how Qubika can help, please reach out to our team.

Works cited
1. https://insights.integrity360.com/what-is-deepfake-social-engineering-and-how-can-businesses-defend-against-it
2. https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/
3. https://keepnetlabs.com/blog/security-awareness-training-statistics
4. https://blog.24by7security.com/cybersecurity-awareness-training-for-employees-pays-big-dividends-for-employers
5. https://secureframe.com/blog/cybersecurity-statistics
6. https://www.channele2e.com/native/your-security-training-isnt-wrong-the-content-is-just-outdated
7. https://cs.uchicago.edu/news/new-study-reveals-gaps-in-common-types-of-cybersecurity-training/
8. https://isgovern.com/blog/how-often-do-you-need-to-train-employees-on-cybersecurity-awareness/
9. https://www.eweek.com/artificial-intelligence/deepfake/


By Brian Liceaga

SVP of Cybersecurity at Qubika

Brian Liceaga is SVP of Cybersecurity at Qubika, where he leads the company’s efforts in building secure, AI-powered applications and cybersecurity services. He joined Qubika following its acquisition of Nitra Security, the Nashville-based cybersecurity firm he founded. Known for its deep expertise in cybersecurity architecture and AI security, Nitra now plays a key role in enhancing Qubika’s AccelerateAI framework, including proactive vulnerability testing, incident triage, risk management, and preventing AI misuse.
