Meta to Label More AI-Generated Content, Remove Less
Meta has modified its approach to handling media that has been manipulated with artificial intelligence (AI) or by other means.
Three of the company’s platforms — Facebook, Instagram and Threads — will now label a wider range of content as “Made with AI” when they detect industry-standard AI image indicators or when the people uploading content disclose that it was generated with AI, Monika Bickert, vice president of content policy at Meta, wrote in a Friday (April 5) blog post.
The platforms will also add labels and context to manipulated media rather than remove it, to avoid unnecessarily restricting freedom of speech, according to the post.
These changes follow feedback from the company’s Oversight Board, public opinion surveys and consultations with academics, civil society organizations and others, the post said.
Previously, Meta’s policies only covered videos in which AI was used to make it appear that a person said something they didn’t say, per the post.
“As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do,” Bickert said in the post.
In addition, the company previously removed manipulated media covered by that policy even when it did not otherwise violate its Community Standards, according to the post.
“[The Board] recommended a ‘less restrictive’ approach to manipulated media like labels with context,” Bickert said.
Now, Meta will add “Made with AI” labels to AI-generated video, audio and images that either have industry-shared signals or have been disclosed by the person who uploaded them, per the post. The company already adds “Imagined with AI” labels to photorealistic images created with its Meta AI feature.
“If we determine that digitally created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” Bickert said.
The new policy on labeling AI-generated content will take effect in May, and the company will stop removing content solely on the basis of its existing manipulated video policy in July, according to the post.
PYMNTS Intelligence has found that regulators and policymakers are aware of the risks posed by AI-generated material and are working on ways to reduce or eliminate the problem.
The technology is likely to become only more sophisticated, according to “Preparing for a Generative AI World,” a PYMNTS Intelligence and AI-ID collaboration.