AI: Deepfake threats in the U.S.

3D generated scene for artificial intelligence technology
Image: © themotioncloud | iStock

Open Access Government provides a brief overview of deepfake threats in the U.S. within the artificial intelligence landscape

In September 2023, the issue of deepfakes, also known as synthetic media threats, was addressed in a joint report by the National Security Agency (NSA) and U.S. federal agency partners. There is growing concern about the potential cybersecurity threat posed by this new technology to the Department of Defense (DoD), National Security Systems (NSS), and Defense Industrial Base (DIB) organizations. (1,2)

Responding to deepfake threats

Deepfake is a term used to describe multimedia that has been artificially created or manipulated by a machine or by deep learning (artificial intelligence) technology. Candice Rockell Gerstner, NSA Applied Research Mathematician who specializes in Multimedia Forensics, said: “The tools and techniques for manipulating authentic multimedia are not new, but the ease and scale with which cyber actors are using these techniques are. This creates a new set of challenges to national security.”

Gerstner stresses the importance of organizations and their employees understanding deepfake tactics and techniques. If targeted, they should have a solid plan in place to minimize the impact of a deepfake attack.

The Cybersecurity Information Sheet (CSI) (1,2) recommends using various technologies to “identify, defend against, and respond to deepfake threats”. These include tools for ascertaining the origin of multimedia, real-time verification capabilities, and measures to protect the likenesses and communications of high-priority officials.
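One basic building block behind verifying the origin of multimedia is comparing received media against a cryptographic hash published by the originator. The sketch below is purely illustrative (the CSI itself does not prescribe this exact mechanism, and real provenance systems rely on richer standards such as digitally signed content credentials); it shows the underlying idea in a few lines of Python:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_media(data: bytes, published_digest: str) -> bool:
    """Check received media bytes against a digest published by the source.

    A match shows the bytes are unaltered in transit; it says nothing
    about whether the original recording itself was authentic.
    Uses a constant-time comparison to avoid timing side channels.
    """
    return hmac.compare_digest(sha256_digest(data), published_digest)

# Example: the originator publishes a digest alongside the media file.
original = b"...raw video bytes..."
published = sha256_digest(original)

print(verify_media(original, published))            # unaltered copy
print(verify_media(original + b"tamper", published))  # manipulated copy
```

Hash checks of this kind only detect post-publication tampering; detecting synthetically generated content in the first place requires forensic analysis or provenance metadata embedded at capture time.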

The guidelines also include suggestions on how to reduce the impact of a deepfake, such as sharing information, planning and practising your response to exploitation attempts, and training your staff. According to the CSI, synthetic media threats involve tactics that target brands, impersonate financial officers and other leaders, and use deceptive communications to gain access to organizations’ networks, communications channels, and sensitive data. Synthetic media can also fuel public unrest by spreading disinformation about politics, society, the military, or the economy.

Safeguarding against evolving deepfake threats

In 2021, the NSA’s The Next Wave journal noted that numerous deep learning algorithms are accessible through open-source platforms such as GitHub. These ready-to-use repositories present a risk to national security because utilizing these technologies only necessitates a personal laptop and a basic level of technical expertise.

The NSA, the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) are urging security experts to adopt the tactics outlined in the document to safeguard their companies against these evolving threats. (2)

References

  1. https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
  2. https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3523329/nsa-us-federal-agencies-advise-on-deepfake-threats/
