
Meta now requires disclosure of AI use in political ads

Facebook and Instagram’s parent company, Meta, has announced a new policy that will require creators of political content to disclose whether it has been created with the use of AI.

Daniel Croft
Thu, 09 Nov 2023

The policy, which will come into effect next year, will see the social media giant label posts with disclosures when an advertiser reveals a post has been created or altered using artificial intelligence (AI).

“We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI,” said Meta on its Facebook blog.

“This policy will go into effect in the new year and will be required globally.”


While advertisers won’t need to disclose alterations that don’t affect the claim an ad makes, such as image resizing or sharpening, Meta has laid out the scenarios in which disclosure is required.

“Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Taking to Meta’s Twitter clone Threads, former UK deputy prime minister and Meta’s president of global affairs Nick Clegg said that the new policy marks Meta’s push to prevent disinformation in the political space.

“This builds on Meta’s industry-leading transparency measures for political ads,” he said.

“These advertisers are required to complete an authorisation process and include a ‘Paid for by’ disclaimer on their ads, which are then stored in our public Ad Library for seven years.”

The move comes only days after Meta announced it would be banning political advertisers and campaigns from using its own generative AI advertising products.

“As we continue to test new generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for housing, employment or credit or social issues, elections, or politics, or related to health, pharmaceuticals or financial services aren’t currently permitted to use these generative AI features,” the company wrote.

“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries.”

The use of generative AI tools such as ChatGPT, DALL-E and Elon Musk’s new AI chatbot ‘Grok’ has raised major concerns over the generation of false information and its potential to influence major events such as elections and political campaigns.

OpenAI chief executive Sam Altman appeared before the US Congress earlier in the year to voice his concerns about the unregulated development of AI, saying his greatest worry was its use to spread disinformation.

“My areas of greatest concern [are] the more general abilities for these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation,” he said.

“Given that we’re going to face an election next year, and these models are getting better, I think this is a significant area of concern, [and] I think there are a lot of policies that companies can voluntarily adopt.”

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.
