Generative AI regulation: identifying a need for consumer protection
In the technology ecosystem, the emergence of generative AI (Gen AI) has sparked both excitement and apprehension. While its potential to revolutionize various industries is undeniable, concerns about its ethical implications and potential misuse have also come to the forefront.
Calls for regulation of the technology have been heard around the world. In the UK specifically, the Government has opted to balance innovation with oversight, and expects responses from various industry regulators at the end of the month detailing their approach to AI.
A recent KPMG UK poll highlighted consumers' appetite for Gen AI regulation, with a significant majority of respondents calling for swift action. But the survey also found other ways to make consumers feel safer about using the technology, as well as a sense of optimism about the future of generative AI.
Overwhelming support for regulation
Gen AI became popular with consumers in 2022, when several user-friendly text-based AI systems were released to the public. Despite the technology's infancy, the poll revealed an urgency for laws around it: over half (53%) of respondents said that regulation should be introduced as soon as possible, and almost everyone (96%) thought that Gen AI regulation is either very or somewhat important.
Some are happy to take a more measured approach: a further 18% would like to see Gen AI regulations implemented within the next six months, while 14% would like it to happen within a year. However, with new laws such as the EU AI Act not becoming applicable for at least another twelve months, they will have to wait.
Safety concerns
A significant portion of respondents, nearly a quarter (24%), expressed reservations about the safety of using Gen AI, and an additional 4% stated that they do not feel safe using it at all. When asked about measures that would enhance their sense of protection, over half (53%) of participants emphasized the importance of increased regulation.
Education from Gen AI creators also emerged as a key factor in fostering a sense of safety among users. This could include comprehensive user guides that clearly explain the capabilities and limitations of the tools, as well as in-product warnings and alerts that inform users about potential risks or ethical considerations associated with specific actions or inputs.
More starkly, 13% of respondents advocated for the complete removal of Gen AI because of their apprehensions about its potential risks. This underscores the need for Gen AI creators to take proactive steps to educate users about safe and responsible usage, build trust, and address safety concerns if this transformative technology is to achieve widespread adoption.
Primary safety issues
When considering the potential impact of Gen AI, respondents expressed several key concerns. The foremost worry was its use for criminal purposes, cited by 52% of respondents as a major concern. Close behind was the misappropriation of information entered into Gen AI models, such as personal photos, health data, and personal details, which worried 51% of respondents.
Furthermore, with over 2 billion voters expected to participate in elections across 50 countries this year, nearly half (46%) of people expressed anxiety about the spread of misinformation and disinformation through Gen AI. The use of this technology to create deepfakes of politicians, as evidenced by incidents involving London Mayor Sadiq Khan and Labour leader Keir Starmer, underscores the validity of these concerns.
Balancing the risks and benefits
Despite concerns about generative AI, it is pleasing to observe that many consumers are aware of the value this technology can bring, as over a third (37%) of respondents said that the benefits of its use outweigh the risks. Furthermore, over three quarters (78%) were either very or somewhat optimistic about the impact of generative AI on society.
It is important to dispel any fears that people may have about generative AI in order to encourage its uptake. As long as people continue to believe that AI will take their jobs, or that its decision-making carries a risk of bias, encouraging them to adopt and get the most from this technology will be an impossible task. We need to talk more about the opportunities and ensure that AI isn't seen as a remote piece of black-box technology competing with or replacing humans, but re-frame it as 'your new AI colleague' that can support people and help them achieve their goals.
This new data underscores the urgent need for Gen AI regulation to address public concerns and ensure the responsible development and deployment of this transformative technology. Fortunately, governments and regulators around the world, including in the UK, are working to put appropriate guardrails in place so people can benefit from it. But our data suggests that people want the companies creating these models to do more to teach the public how to use them safely.
Generative AI businesses should take this as an opportunity to not only build trust with their customers but also help them get better results when using their tools. As Gen AI continues to shape our world, collaboration between governments, regulators, and Gen AI businesses is crucial to unlocking its benefits while mitigating potential risks.