

  • Marijan Hassan - Tech Journalist

Deepfake technology now a growing cybersecurity concern, expert warns

AI technology continues to be a critical driving force for innovation, but malicious actors have once again found a way to abuse it. Imagine being in a Zoom meeting with someone who looks exactly like your CEO and sounds exactly like your CEO, but is in fact an AI-generated impersonation.

That’s the exact risk deepfake technology presents for businesses right now. In a newly published paper, Microsoft’s Chief Science Officer Eric Horvitz identifies interactive and compositional deepfakes as two growing threats that organizations need to prepare for. According to Horvitz, advances in deepfake technology are giving state and non-state threat actors unprecedented tools for producing persuasive misinformation. The perfect tools for the perfect social engineering campaign.

In his paper, Horvitz notes that it will only become harder to distinguish deepfakes from reality over time, a trajectory demonstrated by the generative adversarial network (GAN) methodology.

“The GAN methodology is an iterative technique where the machine learning and inference employed to generate synthetic content is pitted against systems that attempt to discriminate generated fictions from fact,” the paper reads. Over time, the synthetic content generator learns to fool the detector.

“With this process at the foundation of deepfakes, neither pattern recognition techniques nor humans will be able to reliably recognise deepfakes,” the paper continued.
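Horvitz’s description of that adversarial loop can be sketched in miniature. The toy Python example below is not from the paper: both the “generator” and the “discriminator” are deliberately trivial stand-ins for the neural networks a real GAN would use, but the structure is the same, a content generator updated against a detector until its output is hard to tell from the real thing.

```python
import random

random.seed(0)  # deterministic toy run

REAL_MEAN = 5.0  # the "real" data: a Gaussian centred on 5

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

def train(steps=2000, lr=0.05):
    gen_mean = 0.0       # generator starts far from the real data
    disc_estimate = 0.0  # discriminator's running estimate of "real"
    for _ in range(steps):
        # Discriminator step: refine its model of the real data
        # using a genuine sample.
        disc_estimate += lr * (real_sample() - disc_estimate)
        # Generator step: nudge output toward whatever the
        # discriminator currently accepts as real, i.e. learn
        # to fool the detector.
        fake = random.gauss(gen_mean, 1.0)
        gen_mean += lr if fake < disc_estimate else -lr
    return gen_mean, disc_estimate

gen_mean, disc_estimate = train()
# After training, both values sit close to REAL_MEAN: the
# generator's fakes have become statistically hard to separate
# from the real samples.
```

In a real GAN both players are deep networks trained by gradient descent, which is exactly why, as the paper argues, the generator eventually outruns any fixed detector.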

This is not the first time Horvitz has raised concerns on the matter. Earlier this year, the chief science officer testified before the U.S. Senate Armed Services Committee’s Subcommittee on Cybersecurity, where he emphasised the need for organisations to step up their security posture to counter increasingly sophisticated cyberattacks, including those using AI-powered synthetic media and deepfakes.

Horvitz notes that with these new developments there is a need to encourage media literacy and ensure that people are educated on these new trends.

Moreover, new authenticity protocols to confirm identity may need to be implemented, including having people pass a multi-factor identification process before being admitted into a meeting.
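One concrete shape such an admission check could take (a hypothetical sketch, not something the article or Horvitz’s paper specifies) is requiring a time-based one-time password (TOTP, RFC 6238) on top of the usual meeting link: an attacker who has cloned only a face and a voice still lacks the shared secret. A minimal, standard-library-only Python version:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Derive a time-based one-time code from a shared secret
    (standard HOTP dynamic truncation over a 30-second counter)."""
    now = for_time if for_time is not None else time.time()
    counter = int(now // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def admit_to_meeting(secret: bytes, submitted_code: str, now=None) -> bool:
    """Admit a participant only if their code matches the current
    time window; constant-time comparison avoids timing leaks."""
    return hmac.compare_digest(totp(secret, for_time=now), submitted_code)

# Hypothetical usage: the secret would be provisioned out of band,
# e.g. enrolled in an authenticator app.
SECRET = b"shared-secret-provisioned-out-of-band"
code = totp(SECRET, for_time=1_700_000_000)
admitted = admit_to_meeting(SECRET, code, now=1_700_000_000)
```

A production system would also check adjacent time windows for clock drift and rate-limit attempts, but the point stands: the check binds admission to something a deepfake cannot synthesise.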

“As we progress at the frontier of technological possibilities, we must continue to envision potential abuses of the technologies that we create and work to develop threat models, controls, and safeguards,” he concluded.

