AI

Artificial Intelligence: “French researchers impress me!” says James Manyika (Google)



Securing an hour in James Manyika’s ultra-packed schedule is like making an appointment with several people simultaneously. As senior vice president in charge of research, technology, and society at Google, he oversees projects related to quantum computing, nuclear fusion, and a large portion of artificial intelligence. Born fifty-eight years ago in Zimbabwe, he reflects on the impact of this research on society. Capable of eliminating repetitive tasks, detecting certain pathologies, and preventing natural disasters, AI is shaping up to be a hurricane that will upend the future of work and geopolitics. This is a subject that fascinates James Manyika, who long led the McKinsey Global Institute.


This technology also looks like a narcissistic wound for humankind. How should humans react when their creation surpasses them in a growing number of cognitive tasks? What happens when AI gives wrong answers or hallucinates? A member of the Aspen Institute who holds a doctorate in artificial intelligence from Oxford, Manyika also advises the UN on the subject. Above all, even though the seminal research paper behind this wave was written by several researchers then working for the company, Google can no longer remain idle. The search engine, long cautious about this technology prone to biases and errors, must now face competition from Chinese companies such as Baidu, from Microsoft, and from startups such as OpenAI, Anthropic, and Perplexity, which are innovating with smaller teams at an unprecedented pace.

Le Point: To what extent does the advent of artificial intelligence represent a milestone in human history?

James Manyika: AI is revolutionary, first because it can be used in all areas of society and the economy. Second, because it is the first technology capable of accomplishing cognitive tasks that we believed were reserved for humans, such as intuitive reasoning and creativity. This is a major historical advance! As such, it will have a broader and deeper impact than electricity and the steam engine.

In what field is this revolution already palpable?

Take biomedicine. In just the last three weeks, at Google, we have revealed major innovations. We first introduced Med-Gemini, a language model specially tuned for medical reasoning, capable of cross-referencing a patient’s data with complex medical corpora and of improving itself. The following week, we announced AlphaFold 3, an AI model that can model the shapes of almost all molecules, from proteins to DNA and RNA, as well as their interactions. Previous versions of this AI model are already used by more than 1.8 million scientists worldwide. And finally, last week, the Google Research Connectomics team released work, conducted notably with researchers from Harvard, that resulted in the most advanced 3D reproduction of synapses and neurons in the human brain. The team published an article in the journal Science on the detailed reconstruction of a brain region, highlighting new neuronal structures that will help researchers understand certain neurological disorders.

Can these programs be considered truly intelligent?

Your question leads us to reflect on what human intelligence is. Our perception of intelligence has evolved over time. In the past, being intelligent meant being able to do mental calculations quickly. This was the case when I was a child in Zimbabwe. The arrival of calculators disrupted this view. Next, we valued memorization: people who could remember a lot of information were seen as intelligent. Nowadays, we place more emphasis on critical thinking, the ability to analyze, reason, and argue. We often use essays to assess students’ intelligence, as they must show that they are able to structure their ideas. With current advances, however, even this definition of intelligence is being challenged, since AI systems are now capable of writing coherent essays.

Silicon Valley is passionate about “artificial general intelligence” (AGI), an AI capable of surpassing human cognitive functions. Sam Altman, co-founder of OpenAI, is a proponent. This can be scary…

There are two ways to consider AGI. The first sees it as an intelligence capable of accomplishing a wide range of human tasks, such as reasoning, problem solving, natural language understanding, or creativity in poetry. This means that AI could perform any cognitive task that we can do, as efficiently as we can. We are very close to that today. The second definition includes capabilities for self-improvement, autonomous learning, or even making independent decisions, that is, the ability to define one’s own goals and choose what to work on. Some even mention consciousness, although this is a highly debated and complex concept. For them, AGI implies autonomous behavior that goes beyond simply performing a variety of tasks. If you consider this definition, I believe we are still a long way off, and additional scientific breakthroughs will be required.

What are these scientific challenges?

We need to improve reasoning, memory, planning, and the ability of AI to perform “out-of-distribution” reasoning tasks, i.e., tasks for which it has not been specifically trained. In other words, AI must be able to apply its skills to totally new and unknown contexts. This is an area where current systems still show limitations, as they are often dependent on the data they were trained on. Finally, AI systems must be able to understand and interact with their environment in a more natural and intuitive way. This includes sensory perception, object manipulation, and navigation in physical spaces. Progress in this area requires advances in robotics, computer vision, and natural language processing to enable smoother and more efficient interaction with the real world.
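To make “out-of-distribution” concrete, here is a minimal, purely illustrative sketch in Python (not a Google system): a simple model fitted on a narrow range of inputs answers sensibly inside that range and fails badly outside it.

```python
# Toy illustration only (not a Google system): a simple model fitted on a
# narrow input range behaves well inside that range and fails outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, 200)          # training distribution: x in [0, 3]
y_train = np.sin(x_train)

model = np.poly1d(np.polyfit(x_train, y_train, deg=3))  # fit a cubic polynomial

for x in (1.5, 2.9, 6.0, 9.0):                # the last two inputs are out-of-distribution
    print(f"x={x:4.1f}  true={np.sin(x):+.3f}  predicted={model(x):+.3f}")
```

On the last two inputs, which fall outside the training range, the polynomial’s predictions diverge sharply from the true values, which is the failure mode the answer above describes.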

This is what Yann LeCun, the head of artificial intelligence at Meta, aims to do with the JEPA model, which seeks to give machines a grasp of the outside world…

Yes, we need to work on models that better understand context and can adapt to new or unforeseen situations.

Many intellectual professionals fear seeing their jobs disappear…

It reminds me of my doctoral years in artificial intelligence, already twenty-eight years ago… Fifteen years ago, when you asked a researcher or an economist this question, they would answer, “Oh yes, AI and automation will be great because they can perform all the repetitive tasks, the ones for which we can write precise instructions. But they will never be able to access intuitive reasoning, creative work.” But look where we are now!

No one seems safe from having their work replaced by machines, not even Google, which recently carried out layoffs…

Yes, we have restructured. We need to ensure that our teams have the necessary skills to work alongside AI and use these tools effectively. This involves continuous training. Some tasks will be automated, new jobs will be created, and the nature of many jobs will change thanks to AI. Problem-solving skills and the ability to work together will also be fundamental in the future.

Some also fear that the machine will turn against humans…

It is important that the goals of AI systems correspond to ours, as humans. This is what is called alignment. It is crucial that humans can control these systems. It is also crucial to find solutions for professions where wages could be pulled down. I understand these concerns, but it is important to note that AI will also bring many benefits.

AI, which requires great computing power, consumes a lot of energy…

AI models, especially those based on Transformer models [a language understanding tool that has enabled text generation, Editor’s note], consume a lot of energy. This is due to the fact that the complexity of the calculations increases quadratically with the size of the models and the amount of data used to train them. We are working on smaller, more efficient models to reduce this consumption. For example, Gemini exists in four sizes, including Nano, and we are trying to invent even smaller versions.
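For readers who want to see where that cost comes from, here is a minimal sketch in Python and NumPy (not Gemini’s actual code) of single-head self-attention: for a sequence of n tokens it builds an n × n score matrix, which is why compute and memory grow quadratically as sequences get longer.

```python
# Minimal sketch (not Gemini's code): single-head self-attention over n tokens.
# The score matrix has shape (n, n), so compute and memory grow quadratically
# with sequence length.
import numpy as np

def self_attention(X):
    """X: array of shape (n, d) -- n tokens, each a d-dimensional vector."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                     # (n, n): the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ X                                # (n, d) output

for n in (128, 256, 512):
    X = np.random.randn(n, 64)
    out = self_attention(X)
    print(f"n={n:4d}  score-matrix entries={n * n:,}  output shape={out.shape}")
```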

Could nuclear fusion help solve some of the energy challenges?

Yes, and paradoxically, it is AI that is helping us develop this energy of the future! We have conducted research on plasma confinement in tokamaks, a technology that aims to fuse atoms while releasing more energy than is consumed. In Switzerland, Google DeepMind and the Swiss Plasma Center at the École Polytechnique Fédérale de Lausanne (EPFL) are working together on this. Thanks to AI, we have been able to show that it is possible to confine a high-energy plasma more efficiently. This could have major implications for the future of energy.

Will quantum computing allow us to meet the growing computational power needs of AI in time?

Quantum computing will provide faster and more efficient computations. We have defined six technical milestones on the way to a fully fault-tolerant quantum computer. We reached the first one in 2019: it consisted of showing that a quantum computer can solve problems infinitely more complex than anything an architecture based on the Turing-von Neumann model, whose foundations were laid as early as 1945, can handle. We did that demonstration in 2019. And last year, we demonstrated error correction; that was the second milestone. We still have milestones three through six. Some of these milestones are engineering challenges; others require scientific breakthroughs. It will happen in a period that I estimate to be between five years and… a century.
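Real quantum error correction relies on very different physics (Google’s published experiments use surface codes on superconducting qubits), but the underlying idea of protecting information through redundancy can be illustrated with a toy classical analogy in Python: encode one logical bit into three physical bits and decode by majority vote.

```python
# Toy classical analogy only; real quantum error correction (e.g. the surface
# codes Google has demonstrated) is far more involved. The redundancy idea:
# encode one logical bit into three physical bits and decode by majority vote.
import random

def encode(bit):
    return [bit, bit, bit]

def noisy_channel(bits, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in bits]   # each bit may flip

def decode(bits):
    return int(sum(bits) >= 2)                               # majority vote

trials, errors = 100_000, 0
for _ in range(trials):
    logical = random.randint(0, 1)
    if decode(noisy_channel(encode(logical))) != logical:
        errors += 1
print(f"logical error rate: {errors / trials:.4%} (physical flip rate: 5%)")
```

With a 5% flip rate per physical bit, the decoded logical bit is wrong far less often, which is the effect error correction aims for at a much larger scale.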

“Mastery of AI gives an immense strategic advantage, but it is crucial that this mastery be shared to avoid overly large global imbalances”

To what extent is AI defining a new world order? In 2017, Vladimir Putin stated that the country that masters AI will be the ruler of the world…

Within the UN High-Level Advisory Body on AI, we formed a group of 39 members from 33 countries. What is striking is that the countries of the Global South are much more optimistic than Europe or North America… AI is seen there, for example, as a way to solve the shortage of teachers or medical practitioners. However, the development, deployment, and use of AI are largely dominated by a few Northern countries, through a few large companies. To answer Vladimir Putin’s prediction: yes, mastery of AI gives an immense strategic advantage, but it is crucial that this mastery be shared to avoid overly large global imbalances.

Isn’t Africa the big forgotten continent of artificial intelligence?

Lack of access to resources such as data, computing power, and advanced models is a major concern on the African continent. The same goes for access to electricity. To correct this, it is essential to create policies that allow all regions of the world to participate in this technological revolution.

Where does France stand in this technological race?

French researchers impress me! In fact, we have many of them in Google’s ranks. Some French people are at the origin of extraordinary adventures, such as Mistral AI or Hugging Face. We must create the conditions for them to take off!

Video generators are becoming more and more impressive, such as Google’s Veo or OpenAI’s Sora. The distorted perception of truth can be dangerous for democracies…

It is essential to ensure transparency about the origin of AI-generated content. Techniques such as watermarking [a technology that deposits an imperceptible watermark in a video, Editor’s note] can help distinguish machine-generated content from real content. In addition, clear regulations are essential to ensure that all stakeholders follow the same rules. Society must also adapt and learn to tell what is true in a digital world, by developing discernment skills and relying on critical thinking, as is already sometimes necessary on social networks.
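As a purely illustrative toy (not the robust schemes needed for AI-generated video, which must survive compression and editing), the Python snippet below hides a binary mark in the least significant bit of each pixel of an image: the change is invisible to the eye but trivially machine-readable.

```python
# Toy illustration only, not the robust watermarking used for AI-generated video:
# hide a binary mark in the least significant bit of each pixel. The change is
# invisible to the eye (at most 1 grey level) but trivially machine-readable.
import numpy as np

def embed(image, mark):
    """image: uint8 array; mark: array of 0s and 1s with the same shape."""
    return (image & 0xFE) | mark            # overwrite the lowest bit

def extract(image):
    return image & 0x01                     # read the lowest bit back

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

stamped = embed(image, mark)
assert np.array_equal(extract(stamped), mark)
print("largest pixel change:", int(np.abs(stamped.astype(int) - image.astype(int)).max()))
```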

“Open source in AI is complex”

For some eminent researchers, such as Turing Award winner Yoshua Bengio, giving everyone access to this software as “open source” amounts to spreading the recipe for the atomic bomb…

Open source has always been at the heart of technological innovation. At Google, this has resulted in projects like Android, which allows innovators to develop, test, and improve how mobile phones work. For this reason, we have open-sourced Gemma, an AI model that offers state-of-the-art performance on various tasks such as answering questions. That said, the question of open source in AI is complex. On the one hand, this approach is great because it allows researchers and entrepreneurs to access the tools needed to innovate. On the other hand, what will happen if it falls into the hands of malicious actors? For now, we don’t see any major risks. But I emphasize: for now! This could change as the technology progresses…

What should be taught to children to prepare them for the future?

It all depends on their age! It is important to teach science, mathematics, and the arts. Emphasis should also be placed on understanding concepts and on creativity. Problem-solving skills, critical thinking, and collaboration will also be crucial. Regarding programming, it depends. My son is 22 years old, and I have always encouraged him to learn to program. But if he were 3 years old, I might answer this question differently… Because programming is becoming easier and easier for machines. So, if my daughter or son seemed to be an average programmer, I might tell them not to focus on that. It is important to note that current systems can handle relatively simple programming tasks, but for more complex tasks, they are not quite at that level yet. Deep human understanding is still needed. So, if they were of the caliber of Jeff Dean [Google’s Chief Scientist, Editor’s note], I would tell them to learn to program, because they could go beyond the current capabilities of AI systems.

Competition has never been so fierce… Anthropic has just launched its Claude model in Europe, and OpenAI made multiple announcements the day before your annual developer conference…

As Sundar Pichai [Google’s CEO, Editor’s note, see below] pointed out, we advance at our own pace and we think long-term. For us, the real race is the one where we get the best results and develop the technology responsibly. It’s about making the technology as useful as possible to society while properly managing the risks. That’s the race we want to win.

What more do you expect from AI?

AI can accelerate research in genomics, materials science, and physics. A great challenge is to see whether AI can solve scientific problems that are considered unsolvable, or create new mathematical theories. I find that exciting!

Mapping neurons

Connectomics, the branch of Google Research specialized in mapping brain connections, has generated the most detailed 3D reproduction to date of synapses and neurons in a human brain. The team works notably with the Max Planck Institute and Harvard. In the long run, this tool could help researchers better understand and treat neurological disorders.

Modeling proteins

AlphaFold 3 is artificial intelligence software based on Google’s deep learning tools, capable of predicting what a protein will look like from its sequence alone and drawing its 3D structure. It can also predict the structure and interactions of almost all molecules. This tool is already being used by some laboratories to accelerate drug development.

Ultra-personalized assistant

Imagine an AI that finds your mug, corrects computer code written on a board, or suggests the name of a stuffed animal… These are some of the capabilities of Project Astra, presented at Google I/O, an annual conference reserved for its developers. This new conversational agent in natural language, based on Gemini (see below) and still in prototype stage, relies on videos filmed by your smartphone to provide its responses.

Vision. Sundar Pichai, CEO of Google, has restructured the company’s artificial intelligence division.

The battle intensifies among conversational agents

Gemini: The Versatile One

Google’s generative and multimodal AI service, specialized in deep learning, exists in four sizes, including a Nano version that can run offline on smartphones.

Perplexity: The Outsider

The smallest player in AI wants to stand out from the industry giants by focusing on search that is as comprehensive as possible, scientifically credible, and backed by listed sources.

Claude: The Literary One

Claude 3, the new version of the American company Anthropic’s chatbot, is now accessible in Europe and comes in three models of increasing power: Haiku, Sonnet, and Opus.

LLaMA: The Discreet One

Facebook’s parent company has just released version 3 of its language model, available in two sizes depending on the number of parameters used (up to 70 billion).

Grok: The Rebel

Developed at the initiative of Elon Musk and accessible in France, Grok is a generative AI linked to the social network X. With an offbeat tone, the tool is available in “fun” or “classic” mode.

ChatGPT: The Creative One

Unveiled on May 13, the new software from OpenAI amazed users with its capabilities. The chatbot converses and translates in real time while integrating an emotional dimension into its voice.



