March 4, 2024 By Jonathan Reed 3 min read

It seems like only months ago deepfakes were still just a curiosity. Now, deepfakes are a real and present danger. And in an election year, the influence of AI-manipulated content could be disastrous.

During a recent Washington Post Live event, Anne Neuberger, deputy national security adviser for cyber and emerging technologies at the White House, commented on the rising risk of deepfakes. Incidents have already occurred, such as the recent fake-Biden robocall meant to discourage voters ahead of the New Hampshire primary.

What are the potential consequences of deepfake attacks in an election year? And could watermarking make a difference in mitigating deepfake attacks?

Ultra-real deepfakes are here

How realistic are deepfakes now? Consider the case of the clerk who fell for a deepfake while working for the Hong Kong branch of a multinational company. In January 2024, the clerk transferred HK$200 million (USD 25.58M) of the firm’s money to fraudsters after being tricked into joining a video conference where all the other participants were AI-generated deepfakes.

Acting senior police superintendent Baron Chan said, “I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference.”

In another case, using a technique called audio-jacking, cyber researchers were able to modify the details of a live financial conversation occurring between two people with the assistance of generative AI. In this staged exchange, money was diverted to a fake adversarial account without the speakers realizing their call was compromised.

Meanwhile, AI itself can be fooled with prompt injection attacks that manipulate large language models (LLMs). This can result in tricking an LLM into performing unintended actions, circumventing content policies to generate misleading or harmful responses, or revealing sensitive information.

Can watermarking save the day?

AI watermarking works by embedding a unique signal into an artificial intelligence model’s output. This signal can be an image or text, and it’s intended to identify the content as AI-generated.

Some types of watermarks include:

  • Visible watermarks: Can be seen by the human eye, such as logos, images, copyrighted text and personal signatures.
  • Invisible watermarks: Cannot be seen and may utilize stenographic techniques and watermark extraction algorithms.
  • Public watermarks: Not secure and can be modified by anyone using certain algorithms.
  • Frequency and spatial watermarks: A form of domain watermarking that defines images as pixels. This provides improved watermarking quality and imperceptibility.

During the Washington Post event, Neuberger touched upon watermarking as a way to mitigate risks posed by deepfakes. She mentioned that watermarking could be effective for platforms that comply with mandates like the White House’s AI Executive Order. For example, on Facebook, any AI-generated content might display an icon that clearly states the content was generated with artificial intelligence.

While watermarking would be useful on compliant platforms, “there will always be platforms… that are not interested in being responsible. And for that, researchers and companies are looking at and need to do more to build the technology to identify what are deepfakes,” said Neuberger.

Election year impact

With approximately 4.2 billion people expected to vote in elections around the world in 2024, AI creators, scholars and politicians said in interviews that standards on the watermarking of AI-generated content must be established quickly. Otherwise, AI-generated fake content could have an impact on election results.

While standards would be welcome, nefarious actors and extremist or nuisance groups certainly won’t be watermarking their deepfakes. If anything, they will develop ways to hide or remove watermarks from their malicious content.

Perhaps the solution to AI deepfakes can be found in the cause. Maybe AI-driven deepfake detectors will be deployed by social media platforms. Or maybe, someday, you will be able to download an app that detects deepfakes for you.

More from News

White House cements CISA’s role as national coordinator for cybersecurity

2 min read - In 2013, the Obama Administration rolled out "The Presidential Policy Directive (PPD) on Critical Infrastructure Security and Resilience", a forerunner to the Cybersecurity and Infrastructure Security Agency (CISA), created "to strengthen and maintain secure, functioning and resilient critical infrastructure."The directive was groundbreaking in 2013, noting the importance of the rising risk of cyberattacks against critical infrastructure. But as cyber risks are constantly shifting, every cybersecurity program needs to be re-evaluated, and CISA is no exception. That’s why, in April 2024, President…

Debate rages over DMCA Section 1201 exemption for generative AI

3 min read - The Digital Millennium Copyright Act (DMCA) is a federal law that protects copyright holders from online theft. The DMCA covers music, movies, text and anything else under copyright. The DMCA also makes it illegal to hack technologies that copyright owners use to protect their works against infringement. These technologies can include encryption, password protection or other measures. These provisions are commonly referred to as the “Anti-Circumvention” provisions or “Section 1201”. Now, a fierce debate is brewing over whether to allow…

CISA Malware Next-Gen Analysis now available to public sector

2 min read - One of the main goals of the Cybersecurity and Infrastructure Security Agency (CISA) is to promote security collaboration across the public and private sectors. CISA firmly believes that partnerships and effective coordination are essential to maintaining critical infrastructure security and cyber resilience. In faithfulness to this mission, CISA is now offering the Malware Next-Generation Analysis program to businesses and other organizations. This service has been available to government and military workers since November 2023 but is now available to the…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today