An investigation by the advocacy group Global Witness revealed on Thursday that TikTok and Facebook approved advertisements containing blatant falsehoods about the U.S. election just weeks before the vote, raising concerns about the tech platforms’ policies for detecting harmful disinformation.
Global Witness submitted eight ads with false election claims to TikTok, owned by Chinese company ByteDance; Facebook, owned by Meta; and YouTube, owned by Google, to evaluate their ad systems in the lead-up to the November 5 election. The ads included outright falsehoods, such as the claim that people can vote online, along with content promoting voter suppression, inciting violence against a candidate, and threatening electoral workers and processes.
According to Global Witness, TikTok “performed the worst,” approving four of the ads despite its policy prohibiting all political ads. Facebook approved one of the submitted ads.
“Days away from a tightly fought U.S. presidential race, it is shocking that social media companies are still approving thoroughly debunked and blatant disinformation on their platforms,” stated Ava Lee, the digital threats campaign leader at Global Witness.
The study comes as researchers warn of the increasing dangers posed by disinformation—both from domestic actors and foreign influence operations—during a closely contested election between Democratic contender Vice President Kamala Harris and Republican nominee Donald Trump.
“In 2024, everyone knows the danger of electoral disinformation and how important it is to have quality content moderation in place,” Lee emphasized. “There’s no excuse for these platforms to still be putting democratic processes at risk.”
– Growing scrutiny –
A TikTok spokeswoman said the four ads were “incorrectly approved during the first stage of moderation.”
“We do not allow political advertising and will continue to enforce this policy on an ongoing basis,” she told AFP.
A Meta spokeswoman pushed back against the findings, saying they were based on a small sample of ads and therefore “not reflective of how we enforce our policies at scale.”
“Protecting the 2024 elections online is one of our top priorities,” she added.
Global Witness said the ad approved by Facebook falsely claimed that only people with a valid driver’s license can vote.
Several U.S. states require voters to provide a photo ID, but do not specify that it must be a driver’s license.
Global Witness said YouTube initially approved half of the ads submitted, but blocked their publication until formal identification, such as a passport or driver’s license, was provided.
The watchdog called that a “significantly more robust barrier for disinformation-spreaders” compared to the other platforms.
Platforms are facing growing scrutiny following the chaotic spread of disinformation in the aftermath of the 2020 election, with Trump and his supporters challenging the outcome after his defeat to Joe Biden.
Google on Thursday said it will “temporarily pause ads” related to the elections after the last polls close on November 5.
The tech giant said the measure, also introduced during the 2020 election, was expected to last a few weeks and was being implemented “out of an abundance of caution and to limit the potential for confusion,” given the likelihood that vote counting will continue after Election Day.
Separately, Meta has said it will block new political ads during the final week of the election campaign.