Meta Will Label AI-Generated Content Starting In May
Tech Giant Asks Creators to Declare Content with ‘Made with AI’ Label
The company’s new policy requires content creators to self-declare that they made audio, video or image content using generative AI. Meta will also look for “industry standard AI image indicators” whenever users upload content to Facebook, Instagram or Threads.
The change comes after the Meta Oversight Board in February urged the company to update its policy on “manipulated media,” writing that Facebook’s policy, formulated in 2020, was at once too permissive in some respects and too restrictive in others. That policy required Meta to remove content manipulated to make it appear that someone said something they didn’t say.
After four years of advancements in generative AI, it’s now “equally important to address manipulation that shows a person doing something they didn’t do,” wrote Meta Vice President of Content Policy Monika Bickert in a blog post.
Starting in July, Facebook will no longer take down deepfake videos unless they violate other Meta community standards, such as those against voter interference, bullying and harassment, or violence and incitement. The Oversight Board “argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards,” Bickert wrote.
The social media giant argued that leaving up and labeling AI-generated content, even if it carries a “particularly high risk of materially deceiving the public on a matter of importance,” is better than removing it. “This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”
Other technology giants have taken steps to make it easier to identify whether a visual has been manipulated. Google last year introduced an “about this image” feature that helps users trace an image’s history in search results to determine whether it was altered. YouTube, like Meta, offers a self-labeling mechanism for creators, allowing them to check a box before posting to declare that their media contains AI-generated or synthetic material.
The Coalition for Content Provenance and Authenticity, a Microsoft-backed industry body, created a technical standard known as C2PA that attaches a “nutrition label” to visual and audio content, encoding details about a piece of content’s origins.
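As a rough illustration of the idea, and not Meta’s or the coalition’s actual implementation, the sketch below models a C2PA-style provenance manifest as a plain Python dictionary and applies a simple labeling rule. The field names and the `should_label_as_ai` helper are hypothetical stand-ins for whatever indicators a platform actually checks.

```python
# Illustrative sketch only: a simplified, hypothetical stand-in for a
# C2PA-style provenance manifest and a platform's labeling decision.
# Real C2PA manifests are signed structures embedded in the media file,
# not plain dictionaries.

AI_SOURCE_TYPES = {
    # IPTC digital source type commonly used to mark AI-generated media
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}

def should_label_as_ai(manifest: dict, creator_declared_ai: bool) -> bool:
    """Return True if content should carry a 'Made with AI' style label.

    Mirrors the article's description: label when the creator self-declares
    AI use, or when industry-standard indicators appear in the content's
    provenance metadata.
    """
    if creator_declared_ai:
        return True
    for assertion in manifest.get("assertions", []):
        if assertion.get("digitalSourceType") in AI_SOURCE_TYPES:
            return True
    return False

# Example: a hypothetical manifest for an image produced by a generative model.
manifest = {
    "claim_generator": "example-image-generator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "digitalSourceType": (
                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                "trainedAlgorithmicMedia"
            ),
        }
    ],
}

print(should_label_as_ai(manifest, creator_declared_ai=False))  # True
```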
Marking AI-generated content can have a measurable impact: studies show that social media users are less likely to believe or share content labeled as misleading.