Generative AI

NIST announces new initiative to create systems that can detect AI-generated content


The National Institute of Standards and Technology today announced it’s launching a new initiative called NIST GenAI aimed at assessing generative artificial intelligence models and creating systems that can identify AI-created text, images and videos.

The launch of the new program came as NIST revealed its first draft publications on AI risks and standards.

NIST GenAI will work to create new AI benchmarks and attempt to build what it calls “content authenticity” systems that can detect AI-generated media such as text and “deepfake” videos. It’s an effort to counter the dangers of fake and misleading AI-generated information.

In a press release, NIST said the new program will “issue a series of challenge problems” that are intended to evaluate the capabilities and limitations of generative AI models. Using these evaluations, it will then pursue strategies that can promote “information integrity and guide the safe and responsible use of digital content.”

According to its new website, NIST GenAI’s first project is an effort to build systems that can accurately identify whether content was created by a human or an AI system, and it will begin with text. Although there are existing tools that claim to be able to detect things like deepfakes, which are videos manipulated by AI, various studies have shown that they’re not especially reliable.

That’s why NIST is inviting teams from the academic world, the AI industry and other researchers to submit what it calls “generators” and “discriminators.” The generators are AI systems that generate content, while the discriminators are those designed to identify AI-created content.

In the study, NIST asks that any submitted generator must produce a summary of 250 words or fewer when given a topic and a set of documents. Meanwhile, the discriminators will be tasked with detecting whether summaries were created by humans or AI. NIST GenAI will prepare the test data itself to ensure fairness. It added that systems trained on publicly available data in ways that do not comply with applicable laws and regulations will not be accepted in the study.
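For illustration, here’s a minimal Python sketch of how the two roles divide the work. NIST hasn’t published an official submission interface, so the `Task`, `generator` and `discriminator` shapes below are hypothetical stand-ins, not the program’s actual harness:

```python
# Hypothetical sketch of the two roles in the NIST GenAI text pilot.
# NIST has not published a submission API; these interfaces are
# illustrative assumptions, not the official evaluation harness.

from dataclasses import dataclass


@dataclass
class Task:
    topic: str
    documents: list[str]


def generator(task: Task, max_words: int = 250) -> str:
    """Toy 'generator': produce a summary of max_words or fewer.

    A real entry would call a language model here; this stub just
    truncates the concatenated source documents to show the
    expected input/output shape.
    """
    words = " ".join(task.documents).split()
    return " ".join(words[:max_words])


def discriminator(summary: str) -> str:
    """Toy 'discriminator': label a summary as 'human' or 'ai'.

    A real entry would use a trained classifier; this placeholder
    heuristic (average word length) exists only to show that the
    output is a binary human-vs-AI judgment.
    """
    words = summary.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    return "ai" if avg_len > 5.0 else "human"


if __name__ == "__main__":
    task = Task(
        topic="AI content detection",
        documents=["NIST launched a program to evaluate generative AI models."],
    )
    summary = generator(task)
    print(len(summary.split()), "words ->", discriminator(summary))
```

In a real submission, the generator stub would wrap a generative model and the discriminator a trained detector; the point of the sketch is simply that generators map a topic and documents to a bounded summary, while discriminators map a summary to a human-or-AI label.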

NIST wants to move as fast as it can: Registration for the study will open May 1 and close Aug. 2. The study will then commence, with results to be delivered by February 2025, it said.

The study comes at a time when AI-generated misinformation and disinformation appear to be increasingly widespread. A recent study by the deepfake detection company Clarity Inc. found that the number of deepfakes published since the beginning of the year is up more than 900% compared to the same period in 2023.

The suggestion is that misleading content is becoming more of a problem as generative AI itself becomes more widely available. People have expressed concerns over the dangers of AI-generated content, with a recent poll by YouGov finding that 85% of Americans are worried about being misled by deepfakes.

Draft documents for AI policy

Besides launching the new program, NIST also published a series of drafts that are intended to shape the U.S. government’s policies around AI. The documents included a draft that aims to identify the risks of generative AI, as well as strategies for adopting the technology. It was created with input from a public working group made up of more than 2,500 researchers and experts, and will be used in conjunction with NIST’s existing AI risk management framework.

In addition, NIST published a draft companion resource for its existing Secure Software Development Framework that outlines best practices for the development of generative AI applications and dual-use foundation models. It defines dual-use foundation models as those trained on “broad data” and designed for “a wide range of contexts” that could potentially be “modified to exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”

Other documents released by NIST today pertain to the reduction of risks associated with synthetic content, along with a plan for the development of global AI standards. All four of the documents are open for public comment until June 2.

Laurie Locascio, NIST director and undersecretary of Commerce for standards and technology, said the agency is worried that generative AI comes with risks that are “significantly different” from those seen in traditional kinds of software. “These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” she said.

Today’s announcements are NIST’s response to U.S. President Joe Biden’s executive order on AI, which laid out rules requiring greater transparency from AI firms regarding how their models work. The order also established standards for labeling generative AI content and other things.

Image: Microsoft Designer
