Artificial Intelligence Must Not Replace Human Decision-making
After an already full morning, including audiences with the president of Cape Verde and more than 100 comedians from around the world, Pope Francis flew by helicopter to Borgo Egnazia, the luxury resort where the G7 summit is being held.
Pope Francis will arrive back at the Vatican around 9 p.m. local time after a helicopter ride of about an hour and a half.
The Vatican has been heavily involved in the conversation on artificial intelligence ethics, hosting high-level discussions with scientists and tech executives on the subject in 2016 and 2020.
In his remarks at the G7 on Friday, Francis also highlighted some specific limitations of AI, including its limited ability to predict human behavior.
He described the use of artificial intelligence in the judicial system to analyze data about a prisoner’s ethnicity, type of offense, behavior in prison, and more in order to assess their suitability for house arrest rather than imprisonment.
“Human beings are always developing and are capable of surprising us by their actions. This is something that a machine cannot take into account,” he said.
He criticized “generative artificial intelligence,” which he said can be especially appealing to students today, who may even use it to compose papers.
“Yet they forget that, strictly speaking, so-called generative artificial intelligence is not really ‘generative.’ Instead, it searches big data for information and puts it together in the style required of it. It does not develop new analyses or concepts but repeats those that it finds, giving them an appealing form,” the pontiff said.
“Then, the more it finds a repeated notion or hypothesis, the more it considers it legitimate and valid. Rather than being ‘generative,’ then, it is instead ‘reinforcing’ in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions.”
This runs the risk of undermining culture and the educational process by reinforcing “fake news” or a dominant narrative, he continued, noting that “education should provide students with the possibility of authentic reflection, yet it runs the risk of being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.”
He also pointed to the increasing use of AI programs, such as chatbots, that interact directly with people in ways that can even be pleasant and reassuring, since they are designed to respond to the psychological needs of human beings.
“It is a frequent and serious mistake to forget that artificial intelligence is not another human being,” he underlined.