
How to Spot Political Deepfake Ads This Year

While deepfakes are evolving and do pose a threat to this year’s elections, they are not without flaws. With a discerning eye, it’s often possible to identify falsified video, photos or audio recordings of politicians.

(TNS) — In January, a robocall seemingly recorded by President Joe Biden went out to New Hampshire residents, urging them not to vote in the upcoming primary election.

Days later, a 10-second audio clip that sounded like a prominent New York Democrat caught in an embarrassing hot mic moment spread through the city’s political class.

Neither of these recordings was real, though. They’re just two examples of deepfakes — videos, images or audio fabricated with artificial intelligence — that have impersonated American politicians in recent months.

These phony clips — which can appear uncannily real and convincing — could pose a major problem for voters in this election cycle, according to AI experts.

“The potential for deepfakes to escalate tensions, influence electoral outcomes, and erode confidence in political figures and institutions is a clear and present danger,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, told McClatchy News.

However, deepfakes are not without flaws, and with a discerning eye, they can often be identified.

Here are some things to look for when attempting to determine whether video, photo or audio recordings of politicians are legitimate.

What to look for

There are often a number of tell-tale technical imperfections associated with deepfakes, experts said.

Often in fabricated videos, the sound is not entirely in sync with the speaker’s lip movements, creating an audio-visual mismatch, Markus Appel, a professor of new media at the University of Würzburg in Germany, said.
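
For technically inclined readers, that mismatch can be roughed out in code. The sketch below is only an illustration, not a vetted detection tool: it assumes the open-source Python libraries opencv-python, mediapipe, librosa and numpy, and it simply correlates how far a speaker's lips open in each frame with how loud the audio is at that moment. On a talking-head clip, an unusually low correlation can hint at the kind of lip-sync mismatch Appel describes.

```python
# Illustrative sketch only: not from the article. Assumes opencv-python,
# mediapipe, librosa and numpy are installed, and that the audio track
# covers the same time span as the video.
import cv2
import librosa
import mediapipe as mp
import numpy as np

def mouth_openness(video_path):
    """Per-frame gap between the upper and lower inner-lip landmarks."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    gaps = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            # Landmarks 13 and 14 are the inner upper and lower lip
            # in MediaPipe's face mesh.
            gaps.append(abs(lm[13].y - lm[14].y))
        else:
            gaps.append(0.0)  # no face found in this frame
    cap.release()
    face_mesh.close()
    return np.array(gaps)

def sync_score(video_path, audio_path):
    """Correlate mouth movement with audio loudness (higher = better sync)."""
    gaps = mouth_openness(video_path)
    samples, _ = librosa.load(audio_path, sr=None)
    loudness = librosa.feature.rms(y=samples)[0]
    # Resample the loudness envelope to one value per video frame.
    per_frame = np.interp(np.linspace(0, len(loudness) - 1, num=len(gaps)),
                          np.arange(len(loudness)), loudness)
    return float(np.corrcoef(gaps, per_frame)[0, 1])
```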

Further, unnatural-looking expressions or head movements could suggest a video is fake, said Appel, the author of a 2022 study on detecting deepfakes.

“Deepfakes often struggle to perfectly replicate natural human expressions, especially around the edges of the head and mouth,” Mirsky said.

It’s also important to scrutinize the lighting, or lack thereof, in videos, he said. Deepfakes often superimpose a subject onto a background, which can create inconsistencies in lighting and shadows.
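
Those compositing artifacts can also be probed with basic image forensics. The sketch below is a minimal illustration of error level analysis, a longstanding forensic technique that none of the experts quoted here specifically endorse: it resaves an image as a JPEG and amplifies the per-pixel difference, since a subject pasted in from another source often recompresses differently than its surroundings. It assumes the Pillow and numpy libraries.

```python
# Illustrative sketch only: error level analysis (ELA), a classic
# image-forensics technique not named in the article. Assumes the
# Pillow and numpy libraries are installed.
import io
import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(image_path, quality=90):
    """Resave the image as JPEG and return the amplified difference.

    Regions spliced in from another source often recompress differently,
    so they can show up brighter or darker than the rest of the frame.
    """
    original = Image.open(image_path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) difference so it is visible to the eye.
    peak = int(np.array(diff).max()) or 1
    scale = 255.0 / peak
    return diff.point(lambda px: min(255, int(px * scale)))

# Usage: error_level_analysis("suspect_frame.jpg").show()
```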

Additionally, experts said, the substance of what a speaker is saying should be examined for irregularities.

If the speaker is making unusual or out-of-character claims, it may signal foul play, Mirsky said.

“How likely is it that this person/politician would say/do these things?” Appel said, noting that “a politician recommending citizens not to vote is highly unlikely.”

Further, voters should consider the context surrounding a clip, Appel said, asking themselves where it was disseminated and whether warnings about possible deepfakes had already been raised.

As a general rule, sensational clips should be met with a “healthy dose of skepticism” and verified via more than one reputable news source, Mirsky said.

And if there are suspicions of fakery, viewers or listeners should refrain from immediately sharing the content online, Appel said.

Instead, they should report the content to the platforms hosting it, such as X or Facebook, Mirsky said.

As time goes on, deepfakes will only become harder to recognize, but the challenge they pose is not insurmountable, he said.

“By promoting awareness, encouraging critical engagement with digital content, and leveraging technological advancements to detect and mitigate these threats, we can protect the integrity of the democratic processes,” he said.

© 2024 The Charlotte Observer. Distributed by Tribune Content Agency, LLC.