
Generative AI: Control Without Prohibition  


Over the past year, artificial intelligence systems with impressive capabilities have become available to the public. The idea of putting the development of AI on hold is naïve and illusory. But there is an urgent need to define limits by putting a system of conformity assessment and standards in place. This control initiative is the responsibility of politicians, who can approach the task with confidence thanks to the expertise the French Laboratory for Metrology and Testing (LNE) can provide.

Swen Ribeiro, Research Engineer in the Artificial Intelligence Assessment and Cybersecurity Department at LNE, contributed to this op-ed.

In the field of artificial intelligence (AI), last year saw the launch of ChatGPT, now powered by the GPT-4 model. This chatbot is impressive in its ability to converse with a user by convincingly reproducing the structure of a conversation.

The Beginning of ChatGPT

Like DALL-E 2, Midjourney, and Stable Diffusion, which specialize in image generation, ChatGPT is a so-called generative AI. It is based on learning algorithms that can carry out a task after being trained on a very large quantity of data.

For example, an image-generating AI can draw a cat after having analyzed a large number of images of cats. Similarly, ChatGPT was trained on a very large number of texts available on the Internet. As a result, it can predict the continuation of a sentence or text from its beginning, and can therefore mimic a conversation.
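The principle can be illustrated with a toy model. The sketch below is a minimal, purely illustrative bigram predictor in Python: it merely counts which word follows which in a tiny made-up corpus, whereas ChatGPT learns far richer next-word statistics with a neural network trained on billions of words. The corpus and function names are our own illustrative choices, not anything from the actual system.

```python
# A minimal sketch of next-word prediction, the principle behind
# generative text AI. This toy bigram model only counts word pairs
# in a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def complete(prompt: str, max_words: int = 5) -> str:
    """Extend a prompt by repeatedly predicting the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no continuation was ever observed for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # -> "the cat sat on the cat sat"
```

Greedy one-word-at-a-time prediction quickly starts repeating itself on such a tiny corpus; large models mitigate this with sampling, but the objective is the same: produce a plausible continuation, not a true statement.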

There is little public documentation on the algorithm behind ChatGPT. However, we do know that the training data and the algorithm's outputs have been thoroughly cleaned, that the training process has been tightly controlled, and that certain use cases have been refined under human supervision. The result is responses that align with human expectations, both in moral values and in clarity of expression.

A Breakthrough

While ChatGPT is the latest in a series of conversational AIs, it marks a significant breakthrough in performance and in public awareness of AI capabilities.

But this awareness comes amid a certain amount of confusion. In concrete terms, the general public tends to assume that ChatGPT's answers are true because they are well formed. However, this type of AI is not trained to make true statements but to respond in a fluid and natural way. Truth does not come into play at all.

For example, when asked for the biography of a well-known person, ChatGPT can return completely false information, including wrong dates and links to non-existent websites. Its training allowed it to recognize that factual statements often include strings of characters starting with 'www.'; it did not teach it the function of those strings, which is to link to an information source.

Compared with everything that has gone before in the field of AI, the breakthrough ultimately lies in the fact that our usual filters no longer work: the form is so impressive that it can no longer be used to judge the substance.

Dangerous?

But this does not mean that these generative AIs are dangerous. When used by specialists in a particular field, they can help to gather, sort, or select information that needs to be verified elsewhere. We should view these AIs as a new type of search engine, enabling the articulation of multiple concepts in a single query like never before. Graphic designers are already using image-generating AIs to kick-start the creative process with proposals.

As to whether generative AI is the basis of a technological, societal, or human revolution, it is hard to say. At the very least, we do not see this revolution as any more radical than the one brought about by the Internet or the smartphone. Will generative AI transform society in the same way? Or will it, like electricity, change the way we produce? We still don't know.

One thing is certain: there is no turning back. And the very idea of even a temporary pause in the development of AI strikes us as naïve and illusory.

Control Initiative: the LNE

However, there is now an urgent need to control and limit the development of artificial intelligence, which is in the hands of private companies with colossal financial resources. It is not a question of competing with them, but of putting in place a system of conformity assessment and standards like those that exist for all products placed on the market.  

This control initiative is the responsibility of politicians, who can approach the task with confidence thanks to the expertise LNE can provide. We have already assessed over 1,000 AI systems for manufacturers and public authorities in sensitive fields such as medicine, defense, and autonomous vehicles, as well as in sectors like agri-food and Industry 4.0.

In 2021, LNE developed the first certification framework for AI processes, ensuring that solutions comply with best practices in algorithmic development, data science, and business and regulatory requirements.

LEIA

LNE is also working on the deployment of an infrastructure unique in Europe: the LEIAs (Artificial Intelligence Evaluation Laboratories).

The LEIAs will evaluate AI software and physical devices, checking their reliability, safety, and adherence to ethical standards, including fair disclosure of information to users.

TEF

In addition, the LEIA project is one of the cornerstones of LNE's involvement in three European projects, in collaboration with more than 50 partners, control authorities, and manufacturers. These TEFs (Testing and Experimentation Facilities) will develop Europe-wide testing methods and resources to qualify the performance, reliability, and robustness of AI systems and to establish whether they are trustworthy.

Through its state missions of protecting consumers and safeguarding industry competitiveness, and drawing on its expertise as an AI assessor and certifier, LNE is positioning itself as a trusted third party able to support the growth of generative AI by helping to ensure its controllability and compliance with regulations.

LNE believes in supporting innovation rather than hindering it, through mechanisms that guarantee trusted AI in France and Europe.
