  • Marijan Hassan - Tech Journalist

NSA releases new advisory cautioning against deepfake technology

The National Security Agency (NSA), in partnership with other U.S. federal agencies, has issued new guidance on deepfakes. The threat poses a significant cybersecurity challenge for National Security Systems (NSS), the Department of Defense (DoD), and organizations in the Defense Industrial Base (DIB).

The joint document, called the "Cybersecurity Information Sheet (CSI) - Understanding the Threat of Deepfakes," is intended to assist organizations in recognizing, defending against, and responding to deepfake threats. The NSA took the lead in creating the document, with contributions from the Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA).

The term "deepfake" encompasses multimedia that is either artificially generated or altered through machine learning or artificial intelligence technologies.

Candice Rockell Gerstner, an NSA Applied Research Mathematician specializing in Multimedia Forensics, points out that the methods for manipulating genuine multimedia have been around for some time. However, what's new is the ease and widespread use of these techniques by cyber actors, creating a fresh set of challenges for national security.

She emphasizes the importance of organizations and their employees being able to identify deepfake tactics and having a response plan ready in case of an attack.

The CSI offers several recommendations for organizations to counter deepfake threats. These include adopting real-time verification tools, passive detection methods, and safeguarding high-ranking officers and their communications. The guidance also emphasizes the significance of information sharing, preparation for dealing with exploitation attempts, and training personnel to mitigate the impact of deepfakes.
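The CSI itself does not prescribe specific implementations. As one illustrative sketch of what real-time verification of media could look like in practice, an organization might distribute a manifest of cryptographic digests for official media out-of-band and check received files against it; the function names and the manifest here are hypothetical, not from the advisory.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_media(data: bytes, trusted_manifest: dict, name: str) -> bool:
    """Check a received file's digest against a trusted manifest entry.

    Returns False if the file is unknown or its bytes have been altered.
    """
    expected = trusted_manifest.get(name)
    return expected is not None and expected == sha256_of(data)

# Hypothetical manifest, distributed out-of-band by the originating organization
manifest = {"ceo_statement.mp4": sha256_of(b"original video bytes")}

print(verify_media(b"original video bytes", manifest, "ceo_statement.mp4"))  # True
print(verify_media(b"tampered video bytes", manifest, "ceo_statement.mp4"))  # False
```

A digest check of this kind only proves a file matches a known original; detecting a wholly fabricated video or voice clip still requires the passive detection methods and personnel training the guidance describes.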

Deepfakes can harm an organization's reputation, impersonate its leaders and financial officers, and enable fraudulent communications designed to gain unauthorized access to an organization's networks, communications, and sensitive data.

Advances in computational power and deep learning have made it easier and cheaper to produce fake media at scale. "Organizations need to be vigilant, learn to recognize deepfake tactics, and have a solid plan in place to minimize the impact when confronted with an attack," Gerstner said.
