Generative AI

OpenAI deepfake detector ‘belated but welcome’


OpenAI’s introduction of a deepfake detector that can accurately identify images made with its Dall-E 3 generative AI system met a largely positive reception, even though it comes only five months before the U.S. presidential election.

The release of the detector earlier this week also comes as many other elections around the world are set to unfold this year and fears rise about deepfakes aimed at manipulating voter behavior.

“This is a belated but welcome move,” said Kashyap Kompella, CEO and analyst at RPA2AI Research. “Detection of AI-generated content is a wicked problem. It is not a problem that any one company can solve on its own.”

Election worries

Deepfakes for political deception circulated widely during the 2020 U.S. election. With the explosion of generative AI systems since then, the ease and quality of deepfakes as well as the ability to scale them have multiplied.

Meanwhile, OpenAI and other big GenAI vendors have been under intense pressure to ensure that their text, image and video-generating AI systems are safe and reliable.

Other vendors with widely used text-to-image generators include Stability AI, Midjourney and Adobe.

The proliferation of deepfakes has amplified these concerns. While the OpenAI image detector, which the vendor claims is 98% accurate and is now available to approved testers, addresses deepfake fears, it’s unclear who will use it and whether other GenAI vendors will follow suit with similar tools.

Other OpenAI moves

In addition to the OpenAI image detector, the vendor unveiled a $2 million education fund for nonprofits working to develop AI literacy among older adults and other groups vulnerable to deepfake manipulation. OpenAI established the fund with its partner, Microsoft.

In what’s been seen as an important step, OpenAI also joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA). The organization develops technical standards to address the spread of misleading information by identifying the source and provenance of media content.

“Better late than never. But the tech side of this is where it’s maybe a little lacking,” said Alon Yamin, CEO and co-founder of AI content detection vendor Copyleaks, referring to the OpenAI detector.

“We’re talking about detection in only the OpenAI platform. But we all know there are a lot of different other large language models out there that can create deepfakes and spread fake news and fake facts,” Yamin continued. “I think it’s a step in the right direction but definitely not a full solution.”

OpenAI did not respond to a request for comment about the capabilities and potential uses of the GenAI image detector.

Possible good timing

As for the election season, some see OpenAI’s effort as not too late but rather well-timed for the most intense final phase of campaigning and campaign spending.

In addition to the U.S. presidential and congressional elections on Nov. 5, important elections in Africa, Asia, Europe and Australia are scheduled for later this spring and through the fall and winter.

“When you think about election cycles, this is around the right time to release a tool like this,” Forrester Research analyst Audrey Chee-Read said. “It’s really the heightened part of the conversations about how voters are influenced, a few months before the election.”

GenAI literacy

The OpenAI and Microsoft Societal Resilience Fund is well-intentioned and aimed at the right constituencies, the analysts said.

However, the amount of money the tech companies allocated to the education fund is small compared with, for example, Microsoft’s investment of about $13 billion in OpenAI.

“When I first saw it, I was like, ‘Is it $2 million or $2 billion?'” Yamin said.

Efforts like this are intended to help ensure that those who are not familiar with GenAI are less susceptible to deepfakes.

Initial grant recipients from the fund are Older Adults Technology Services from AARP; C2PA; the International Institute for Democracy and Electoral Assistance; and the Partnership on AI.

“Educating populations and demographics that aren’t as well-versed in this technology is going to be very important,” Chee-Read said. “These are great organizations, but it’s going to be a tough job for them because AI technology is just a wide and complex topic.”

Shaun Sutner is senior news director for TechTarget Editorial’s information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience.


