Deepfaking it: America's 2024 election collides with AI boom

Political consultancies are also seeking to harness AI, further muddying the line between real and unreal.

Florida Governor Ron DeSantis speaks during the Florida Family Policy Council Annual Dinner Gala, in Orlando, Florida, U.S., May 20, 2023. (photo credit: REUTERS/MARCO BELLO)

"I actually like Ron DeSantis a lot," Hillary Clinton reveals in a surprise online endorsement video. "He's just the kind of guy this country needs, and I really mean that."

Joe Biden finally lets the mask slip, unleashing a cruel rant at a transgender person. "You will never be a real woman," the president snarls.

Welcome to America's 2024 presidential race, where reality is up for grabs.

The Clinton and Biden deep fakes - realistic yet fabricated videos created by AI algorithms trained on copious online footage - are among thousands surfacing on social media, blurring fact and fiction in the polarized world of U.S. politics.

While such synthetic media has been around for several years, it's been turbocharged over the past year by a slew of new "generative AI" tools such as Midjourney that make it cheap and easy to create convincing deep fakes, according to Reuters interviews with about two dozen specialists in fields including AI, online misinformation and political activism.

"It's going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad," said Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation.

"There could be things that drop right before the election that nobody has a chance to take down."

Tools that can generate deep fakes are being released with few or imperfect guardrails to prevent harmful misinformation as the tech sector engages in an AI arms race, said Aza Raskin, co-founder of the Center for Humane Technology, a nonprofit that studies technology's impact on society.

Former President Donald Trump, who will vie with DeSantis and others for the Republican nomination to face Biden, himself shared a doctored video of CNN anchor Anderson Cooper earlier this month on his social media platform Truth Social.

"That was President Donald J. Trump ripping us a new asshole here on CNN's live presidential townhall," Cooper says in the footage, although the words don't match his lip movement.

CNN said the video was a deep fake. A representative for Trump didn't respond to a request for comment on the clip, which was still on his son Donald Trump Jr.'s Twitter page this week.

While major social media platforms like Facebook, Twitter, and YouTube have made efforts to prohibit and remove deep fakes, their effectiveness at policing such content varies.

Deep fake Pence, not Trump

There have been three times as many video deep fakes of all kinds and eight times as many voice deep fakes posted online this year compared to the same time period in 2022, according to DeepMedia, a company working on tools to detect synthetic media.

"It's going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad."

Darrell West, senior fellow at the Brookings Institutions Center for Technology Innovation.

In total, about 500,000 video and voice deep fakes will be shared on social media sites globally in 2023, DeepMedia estimates. Until late last year, cloning a voice cost about $10,000 in server and AI-training costs; now startups offer it for a few dollars, it says.

No one is certain where the generative AI road leads or how to effectively guard against its power for mass misinformation, according to the people interviewed.

Industry leader OpenAI, which has changed the game in recent months with its release of ChatGPT and the updated model GPT-4, is itself grappling with the issue. CEO Sam Altman told Congress this month that election integrity was a "significant area of concern" and urged rapid regulation of the sector.

Unlike some smaller startups, OpenAI has taken steps to restrict the use of its products in politics, according to a Reuters analysis of the terms of use of half a dozen leading companies offering generative-AI services.

The guardrails have gaps, though.

For example, OpenAI says it prohibits its image generator DALL-E from creating images of public figures - and indeed, when Reuters tried to create images of Trump and Biden, the request was blocked and a message appeared saying it "may not follow our content policy."

Yet Reuters was able to create images of at least a dozen other U.S. politicians, including former Vice-President Mike Pence, who is also weighing a White House run for 2024.

OpenAI also restricts any "scaled" usage of its products for political purposes. That bans the use of its AI to send out mass personalized emails to constituents, for example.

The company, which is backed by Microsoft, explained its political policies to Reuters in an interview but didn't respond to further requests for comment on enforcement gaps in those policies, such as its failure to block image generation of some politicians.

Several smaller startups have no explicit restrictions on political content.

Midjourney, which launched last year, is the leading player in AI-generated images, with 16 million users on its official Discord server. The app, whose pricing ranges from free to $60 a month depending on factors such as picture quantity and generation speed, is a favorite of AI designers and artists for its ability to generate hyper-realistic images of celebrities and politicians, according to four AI researchers and creators interviewed.

Midjourney didn't respond to a request for comment for this article. During an online chat on Discord last week, CEO David Holz said the company would likely make changes ahead of the election to combat misinformation.

Midjourney wants to cooperate on an industry solution to enable traceability of AI-generated images with a digital equivalent of watermarking and would consider blocking images of political candidates, Holz added.

Republican AI-generated advertisements

Even as the industry wrestles with how to prevent misuse, some political players are themselves seeking to harness the power of generative AI to soup up campaigns.

So far, the only high-profile AI-generated political ad in the U.S. was one published by the Republican National Committee in late April. The 30-second ad, which the RNC disclosed as being entirely generated by AI, used fake images to suggest a cataclysmic scenario should Biden be reelected, with China invading Taiwan and San Francisco being shut down by crime.

The RNC didn't respond to requests for comment on the ad or its wider use of AI. The Democratic National Committee declined to comment on its use of the technology.

Reuters polled all the Republican presidential campaigns on their use of AI. Most did not reply, although Nikki Haley's team said they were not using the technology and longshot candidate Perry Johnson's campaign said it was using AI for "copy generation and iteration," without giving further details.

The potential for generative AI to produce campaign emails, posts and adverts is irresistible for some activists who feel the low-cost tech could level the playing field in elections.

Even deep in rural Hillsdale, Michigan, machine intelligence is on the march.

Jon Smith, Republican chair for Michigan's 5th Congressional district, is holding several educational meetings so his allies can learn to use AI for social media and ad generation.

"AI helps us play against the big cats," he said. "I see the biggest upswing in the local races. Someone who is 65 years old, a farmer and county commissioner, he could easily be primaried by a younger cat using the technology."

Political consultancies are also seeking to harness AI, further muddying the line between real and unreal.

Numinar Analytics, a political data company that focuses on Republican clients, has begun experimenting with AI content generation for audio and images, as well as voice generation to potentially create personalized messaging in a candidate's voice, founder Will Long said in an interview.

Democratic polling and strategy group Honan Strategy Group is meanwhile trying to develop an AI survey bot. It hopes to roll out a female bot in time for the 2023 municipal elections, CEO Bradley Honan said, citing research that both men and women are more likely to speak to a female interviewer.