Keeping politics in check, Meta requires political advertisers to mark when deepfakes used

TASNIM LOKMAN
11 Nov 2023 11:53am
Illustration by Sinar Daily

Tech giant Meta will require political advertisers to flag when they use Artificial Intelligence (AI) or digital manipulation in adverts on its social media platforms Facebook and Instagram.

From January onwards, adverts related to politics, elections or social issues will have to declare any digitally altered image or video.

The new policy will be applied globally and be moderated by a mix of human and AI fact checkers.

The social media company already has policies on deepfakes in place but says "this goes a step further". A deepfake is a video in which AI is used to digitally alter a person’s appearance to make them look like somebody else.

In its announcement, Meta said the flagging requirements would include changing what somebody has said in a video, altering images or footage of real events, and depicting real-looking people who do not exist.

Meta said users will be notified when adverts have been marked as being digitally changed but did not go into detail on how it would be presented.

Advertisers do not have to declare when small changes have been made, such as cropping or colour correction, "unless such changes are consequential or material to the claim, assertion, or issue raised in the ad".

Meta already has policies for all users - not just advertisers - on the use of deepfakes in videos. Deepfake content is removed if it "would likely mislead an average person to believe a subject of the video said words that they did not say".


The new rules require adverts relating to politics, elections or social issues to disclose any kind of digital alteration, whether done by a human or AI, before the ad goes live on Facebook or Instagram.

Meta's other social media platform, Threads, follows the same policies as Instagram.

It says that if advertisers do not declare this when they upload adverts, "we will reject the ad and repeated failure to disclose may result in penalties against the advertiser."

Google recently announced a similar policy on its platforms, while TikTok does not allow any political advertising.

The new policy will face a major test in next year’s political scene as the world anticipates general elections in the United States, United Kingdom, India, Indonesia, Russia and South Africa.

Back in March this year, a fake picture created by AI tools of former US President Donald Trump falsely showing him being arrested was shared on social media. In the same month, a deepfake video of Ukrainian President Volodymyr Zelensky talking about surrendering to Russia was circulated online.

The opposite occurred in July, however, when a video of US President Joe Biden that was claimed to be a deepfake was debunked and proven to be authentic. In the 17-second clip, Biden speaks about the Jan 6 riots that saw protestors storm the US Capitol building in 2021. The video was published on the Democrats’ official Twitter account and quickly became part of a wider conspiracy theory.