Meta accused of approving AI-manipulated political hate adverts

Meta has been accused of approving a series of AI-manipulated political adverts that spread disinformation and incited religious violence during India’s election.

A report shared with The Guardian said the Instagram, Facebook, Threads and WhatsApp owner approved adverts containing known slurs towards Muslims in India.

They apparently included promotions that said “Let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”.

Another advert was said to have called for the execution of an opposition leader who, it falsely claimed, wanted to “erase Hindus from India”.

According to the report, all of the adverts “were created based upon real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives”.

The report, seen by The Guardian, showed researchers had submitted 22 adverts in English, Hindi, Bengali, Gujarati and Kannada to Meta, of which 14 were approved.

Another three were given the green light after small tweaks were made that did not alter the overall provocative messaging.

Meta’s systems were found to have failed to detect that all of the approved adverts featured AI-manipulated images.

It comes despite a vow by the company to prevent AI-generated or manipulated content from being spread on its platforms during the Indian election.

The ads were submitted midway through voting in India’s ongoing mega-election, which began in April and will finish in early June.

The election will decide if Hindu prime minister Narendra Modi – who has been accused of spreading anti-Muslim hate speech at rallies – and his nationalist Bharatiya Janata party will return to power for a third five-year term.

The adverts were created and submitted to Meta’s ad library by India Civil Watch International and campaign group Ekō, which aims to curb the power of corporations.

Maen Hammad from Ekō said: “Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked.”

A Meta spokesperson said in response people who wanted to run ads about elections or politics “must go through the authorisation process required on our platforms and are responsible for complying with all applicable laws”.

The company added: “When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent factcheckers – once content is labelled as ‘altered’, we reduce its distribution.

“We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”

© BANG Media International