
S’pore launches new governance framework for generative AI


SINGAPORE – Transparency about where and how content is generated is crucial in the global fight against misinformation, which has been exacerbated by generative artificial intelligence (gen AI). It is one of nine areas highlighted in an ethical framework for gen AI launched on May 30.

Launched by Deputy Prime Minister Heng Swee Keat, the Model Governance Framework for Generative AI aims to address concerns over the nascent technology, which has taken the world by storm since late 2022 because of its ability to quickly create realistic content.

“Good governance is crucial. With the right guard rails in place, we create conditions to innovate safely, responsibly and for a common purpose,” said DPM Heng, who was speaking during the opening ceremony of the fourth annual Asia Tech x Singapore event held at Capella Singapore on Sentosa.

“The borderless nature of tech also means this must be a shared endeavour,” he added.

Developed by the AI Verify Foundation and Infocomm Media Development Authority (IMDA), the framework identifies nine areas – including accountability, trusted data for AI training and content provenance – where governance of gen AI can be strengthened.

The framework – developed in consultation with some 70 organisations, ranging from tech giants Microsoft and Google to government agencies such as the US Department of Commerce – also balances governance with the need to facilitate innovation.

In the area of trusted data, for example, the framework calls on policymakers to elaborate how existing personal data laws apply to gen AI, which often draws on large amounts of data.

It also suggests that governments could work with communities to curate a repository of training data sets relevant to their specific contexts, such as data sets in “low-resource languages” – languages that are not well represented online. This would make gen AI accessible to a greater number of people.

The framework also identifies content provenance as an area of concern, pointing to the increasing difficulty people face in identifying AI-generated content as the technology grows ever more capable of rapidly producing realistic output.

It points to the need for regulators to work with publishers on incorporating technical solutions such as digital watermarking and cryptographic provenance – which can track and verify the origin of digital content – to flag content created or modified by AI.
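As a rough illustration of the cryptographic-provenance idea (this is a simplified sketch, not the framework's or any standard's specification – real schemes such as C2PA use public-key signatures and certificate chains rather than the shared secret assumed here), a publisher could attach a signed record of a content item's hash and its generator, so that later edits to either the content or the label are detectable:

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the demo; production provenance systems
# would use asymmetric signatures tied to a publisher's certificate.
PUBLISHER_KEY = b"demo-secret-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a tamper-evident provenance record for a piece of content."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"sha256": digest, "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the record is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and hashlib.sha256(content).hexdigest() == claimed["sha256"])
```

Verification fails if the content is altered after signing or if the record's “generator” label is changed – which is the property that would let platforms reliably flag content created or modified by AI.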

The new framework builds on an existing one, which was originally published in 2019 and covers only traditional AI.

The two differ in that while traditional AI can typically only analyse given data, gen AI is able to draw on vast amounts of data to generate original content.

The new framework builds on policy ideas highlighted in IMDA’s 2023 discussion paper on gen AI, and also draws on international feedback from discussions with researchers and AI organisations.

It will also be aligned with international AI principles, such as the Hiroshima AI Process announced during the 2023 Group of Seven summit, which calls for the development of interoperable global standards for AI governance.
