Curia and the Generative AI
That last question is a big one, for data privacy and security are major concerns at the Curia, probably the most important concerns when it comes to Information Technology (IT). For us, protecting the data that we deal with is paramount. Practically all generative AI assistants, be they from Microsoft, Google, or OpenAI, to name the big three, require that the data being handled be stored and manipulated online, with the associated risk of having that data compromised.
There are, of course, other considerations, e.g., the moral, ethical, and fairness implications of using generative AI, the transparency and accountability of generative AI providers, the bias and discrimination that could arise from AI models, the environmental impact of AI infrastructure, and so on.
Knowing all this, perhaps we should ask beforehand, “Should we even use generative AI at all for work, especially at the Curia, where sensitive information about persons, institutions, and events is dealt with on a daily basis?” I, for one, feel that generative AI is a useful tool, an extremely powerful one, but it remains nothing more, and nothing less, than that: a tool. As Fr Casalone pointed out in his presentation to the Curia, we, as creators, bear the ultimate responsibility when we use any tool, technological or otherwise, and this responsibility obviously extends to the use of generative AI.
Generative AI, with its vast datasets and raw computing power, can create content that mirrors human creativity. Still, it should support human decision-making, not undermine or replace it, and it should not be used where there is fear that it could cause harm. As the wise once said (in the quotation above), “They must consider that great responsibility follows inseparably from great power”.