
OpenAI takes steps to boost AI-generated content transparency


OpenAI is joining the Coalition for Content Provenance and Authenticity (C2PA) steering committee and will integrate the open standard’s metadata into the output of its generative AI models to increase transparency around AI-generated content.

The C2PA standard allows digital content to be certified with metadata proving its origins, whether it was created entirely by AI, edited using AI tools, or captured traditionally. OpenAI has already begun adding C2PA metadata to images generated by its latest model, DALL-E 3, in ChatGPT and the OpenAI API. The metadata will also be integrated into Sora, OpenAI’s upcoming video generation model, when it launches more broadly.
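
For developers who want to check for this metadata themselves: in JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 marker segments. The Python sketch below is a minimal, heuristic presence check, not a validator; the function name is our own, and proper verification of a manifest’s cryptographic signatures should use dedicated tooling such as the open-source c2patool.

```python
import struct

def has_c2pa_segment(path: str) -> bool:
    """Heuristically check whether a JPEG carries a C2PA manifest.

    C2PA embeds its manifest store as JUMBF boxes in APP11 (0xFFEB)
    marker segments. This only detects such a segment's presence; it
    does not validate the manifest or its signatures.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                       # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with a C2PA JUMBF label
            return True
        i += 2 + length
    return False
```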

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI explained.
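
What makes the information hard to fake is the cryptography underneath: a C2PA manifest binds a hash of the content and is itself digitally signed, so tampering with either the media or the metadata breaks the signature. Below is a simplified sketch of that idea using Ed25519 from Python’s `cryptography` package; the manifest fields are illustrative stand-ins, not C2PA’s actual schema.

```python
import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Signer side: bind a hash of the content into a signed manifest.
signing_key = Ed25519PrivateKey.generate()
content = b"...image bytes..."
manifest = json.dumps({
    "claim_generator": "example-model",          # illustrative field names only
    "content_sha256": hashlib.sha256(content).hexdigest(),
}).encode()
signature = signing_key.sign(manifest)

# Verifier side: any edit to the manifest invalidates the signature,
# and any edit to the content breaks the embedded hash.
try:
    signing_key.public_key().verify(signature, manifest)
    intact = json.loads(manifest)["content_sha256"] == \
        hashlib.sha256(content).hexdigest()
    print("provenance intact" if intact else "content was altered")
except InvalidSignature:
    print("manifest was forged or altered")
```

Stripping the metadata out entirely remains possible, which is exactly the residual weakness OpenAI notes above.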

The move comes amid growing concerns about the potential for AI-generated content to mislead voters ahead of major elections in the US, UK, and other countries this year. Authenticating AI-created media could help combat deepfakes and other manipulated content used in disinformation campaigns.

While technical measures help, OpenAI acknowledges that enabling content authenticity in practice requires collective action from platforms, creators, and content handlers to retain metadata for end consumers.

In addition to C2PA integration, OpenAI is developing new provenance methods, including tamper-resistant watermarking for audio and detection classifiers that identify AI-generated visuals.

OpenAI has opened applications for access to its DALL-E 3 image detection classifier through its Researcher Access Program. The tool predicts the likelihood an image originated from one of OpenAI’s models.

“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyses its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” the company said.

Internal testing shows high accuracy in distinguishing non-AI images from DALL-E 3 visuals, with around 98% of DALL-E 3 images correctly identified and less than 0.5% of non-AI images incorrectly flagged. However, the classifier struggles to differentiate between images produced by DALL-E 3 and those from other generative AI models.
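
Those two headline rates are not the whole story: how useful a flag is in practice depends on how common AI-generated images are in the pool being scanned. A quick back-of-the-envelope calculation makes the point (the 1% base rate below is an assumed figure for illustration, not from OpenAI):

```python
# Rates reported by OpenAI for the DALL-E 3 detection classifier.
true_positive_rate = 0.98    # ~98% of DALL-E 3 images correctly flagged
false_positive_rate = 0.005  # <0.5% of non-AI images wrongly flagged

# Assumption for illustration: 1 in 100 scanned images is DALL-E output.
base_rate = 0.01

flagged_ai = true_positive_rate * base_rate
flagged_real = false_positive_rate * (1 - base_rate)
precision = flagged_ai / (flagged_ai + flagged_real)
print(f"precision at a {base_rate:.0%} base rate: {precision:.1%}")  # ~66%
```

In other words, even at these error rates, roughly a third of flagged images would be false alarms when AI imagery is rare in the stream, which is one reason OpenAI is seeking independent evaluation of real-world use.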

OpenAI has also incorporated watermarking into its Voice Engine custom voice model, currently in limited preview.

The company believes increased adoption of provenance standards will lead to metadata accompanying content through its full lifecycle to fill “a crucial gap in digital content authenticity practices.”

OpenAI is also joining Microsoft in launching a $2 million societal resilience fund to support AI education and understanding, including through organisations such as AARP, International IDEA, and the Partnership on AI.

“While technical solutions like the above give us active tools for our defences, effectively enabling content authenticity in practice will require collective action,” OpenAI states.

“Our efforts around provenance are just one part of a broader industry effort – many of our peer research labs and generative AI companies are also advancing research in this area. We commend these endeavours—the industry must collaborate and share insights to enhance our understanding and continue to promote transparency online.”

(Photo by Marc Sendra Martorell)

See also: Chuck Ros, SoftServe: Delivering transformative AI solutions responsibly


Tags: ai, artificial intelligence, c2pa, ethics, genai, generative ai, openai, society


