
Indonesia at forefront of Asia’s AI hopes and fears


The recent Global Public Opinion on Artificial Intelligence survey (GPO-AI) revealed that 66% of Indonesians are concerned about the misuse of artificial intelligence (AI) compared to a global average of 49%.

Indonesia has a democratic society, a vibrant tech start-up ecosystem and widespread social media usage – all of which create vulnerabilities to the misuse of AI.

However, Indonesia has the regulatory tools to mitigate risks, while taking advantage of opportunities, if policymakers, industry, and civil society work together creatively to address the public’s concerns.

Indonesian policymakers have already been keen to address AI. Last year, the Ministry of Communication and Informatics (Kominfo) published Circular No. 9 of 2023 on the ethical use of AI, and legislation may be on the way following the launch of an AI readiness assessment with UNESCO. Indonesia also joined other ASEAN member states this year in supporting the ASEAN Guide on AI Governance and Ethics.

Indonesia should leverage all the regulatory tools at its disposal to address AI now – and make these respective tools more robust – rather than wait until a stand-alone law is debated, passed and resourced.  

Indonesia passed its Personal Data Protection Law (PDP Law) in 2022. While the rules and institutions protecting privacy are new in Indonesia, policymakers can make them more robust against emerging AI-related issues by identifying global best practices and implementing them early.

This will be no easy task: as some academics have warned, the basic infrastructure of the rules themselves must still be implemented and existing problems addressed.

For example, countries with active privacy regulators were the first to address generative AI companies. Italy’s data protection authority temporarily banned OpenAI’s popular ChatGPT in 2023, citing the lack of age-verification procedures and of information about how personal data was processed to train the AI.

Around Asia as well, countries with established privacy rules are addressing AI. New Zealand, Australia, Singapore and South Korea have dealt with issues as diverse as automated decision-making, biometric identification, and providing companies using generative AI with guidance to mitigate privacy risks. 

Indonesia would do well to ensure that its nascent privacy rules address AI issues, and to make those rules interoperable and harmonized regionally and globally. Indonesia’s intellectual property (IP) rules also present an opportunity to help address the public’s concerns about the misuse of AI.

Guidance should be given to Indonesia’s growing cohort of AI companies about the potential for copyright infringement when using copyrighted data to train AI. Also, authorities can help shape creatives’ reliance on generative AI tools by providing guidance on whether and to what extent content produced by generative AI can receive copyright protection. 

Having robust anti-counterfeiting and anti-piracy laws will also help address the public’s concerns about AI. Taking down potentially copyright-infringing works made with AI, or addressing AI-fueled promotion of fake goods online, will be useful in building the public’s trust in AI.

One area of IP that should be addressed in Indonesia, and indeed globally, is the right of publicity, or personality rights – which protect a person’s name, image and voice from commercial misuse. Recent news makes the urgency clear, such as when OpenAI used a voice strikingly similar to that of the international star Scarlett Johansson in a new version of ChatGPT.

Deepfakes present a particular challenge to the issues of trust and safety that underpin many rule-of-law and democracy concerns. Indonesia was at the forefront of the use of generative AI in elections earlier this year. Some examples are benign, such as a deepfake of a dancing politician, but others are questionable, such as deepfakes resurrecting deceased politicians.

Indonesia’s experience should be studied and recommendations and best practices circulated widely, not only for the benefit of the country but also for other democracies. 

Rules to protect democracies from deepfakes are being proposed across the globe. For example, India’s Election Commission recently circulated guidelines notifying political parties to adhere to existing rules and not to create or spread harmful deepfakes. In the United States, legislators have proposed a bill that would require transparent labeling of deepfake video or audio in political advertising.

Beyond election integrity, regulators addressing online safety can look to best practices globally. Recently, the Online Safety Regulators Network – a group of eight independent regulators from around the world – published guides on addressing human rights issues in online safety and on building regulatory coherence for online safety issues, which include deepfakes.

Tapping into these networks will be important not only for government policymakers but also for Indonesia’s civil society involved in digital rights issues. 

Although Indonesians are concerned about the misuse of AI, they also lead the world in optimism about the technology. Stanford’s AI Index Report 2024 reveals that 78% of Indonesians believe AI services and tools have more benefits than risks – the highest of 31 countries surveyed. 

Indonesia can lean on its cautious optimism and experience, and utilize existing rules to create a vibrant and ethical AI ecosystem that can be a model worldwide. 

Seth Hays is an attorney and managing director of APAC GATES, a Taipei-based rights consultancy. He also leads the Digital Governance Asia Center of Excellence – a non-profit organization dedicated to sharing policy best practices across Asia.


