Generative AI: security risks and the limits of regulation
OpenAI’s release of ChatGPT seems like only yesterday, but so much has happened since its debut that it’s hard to remember a time before this advanced AI chatbot entered the public consciousness. During its first year, ChatGPT raised as many questions about generative AI as it answered. Two of the biggest are the security risks it poses and the question of whether the use and development of generative AI can be controlled or regulated at all.
Security Risks
Consider the known security risks. ChatGPT often generates incorrect yet believable information, which can lead to dangerous consequences when its advice is followed blindly. Cybercriminals already use it to improve and scale up their attacks. Many believe the greatest risk lies in its further development, or rather in the next generation of generative AI.
With all of generative AI’s advancements over the past year, the security risks are becoming more complex. This becomes evident when you consider two qualities of these tools: the plausibility of their output and the perniciousness of their misuse.
As skeptical, aware people, we are good at recognizing when something feels off. ChatGPT is built to make it feel right. Generative AI solutions are designed to produce responses that sound and look extremely plausible, but plausible is not the same as accurate. They are trained on internet data, and as we all know, you shouldn’t believe everything you read online. Experts can spot the obviously wrong answers, but non-experts won’t know the difference. In this regard, generative AI can be weaponized to flood the world with false information. This is a fundamental issue with any generative AI solution.
More issues stem from the criminal use of these tools. Cybercriminals are using ChatGPT to generate highly realistic phishing emails and URLs for spoofed websites. We have trained ourselves to spot phishing emails and to detect the subtleties that raise a red flag when something is off, like the email from a far-away prince asking for money. Now, however, that email may appear to come from a family member, friend or colleague. ChatGPT and other generative AI solutions can create phishing emails and URLs that look and feel plausible, and cybercriminals are taking full advantage of this highly pernicious capability.
What’s more, generative AI solutions can produce countless variants of a message quickly and at virtually no cost, letting attackers circumvent spam detectors. By weaponizing generative AI in this manner, malicious actors can rapidly scale up their attacks.
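Defenders can push back with simple heuristics. The sketch below is a hypothetical illustration, not a tool from the author or Keeper Security: it flags URLs whose domains sit within a couple of character edits of a domain you trust, one cheap signal that a plausible-looking link may in fact be spoofed. The trusted list and threshold are assumptions made for the example.

```python
# Minimal sketch: flag look-alike domains within a small edit distance of
# domains you trust. The trusted set and threshold are illustrative
# assumptions, not a production allowlist.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "example-bank.com"}
MAX_EDIT_DISTANCE = 2  # catches swaps like "paypa1.com" or "rnicrosoft.com"

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_spoofed(url: str) -> bool:
    """True if the URL's domain nearly matches, but is not, a trusted domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(domain, good) <= MAX_EDIT_DISTANCE
               for good in TRUSTED_DOMAINS)

print(looks_spoofed("https://paypa1.com/login"))  # True: one edit from paypal.com
print(looks_spoofed("https://paypal.com/login"))  # False: exact trusted match
```

A check like this is only one layer, of course; it complements rather than replaces email filtering and user training.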
Generative AI will also likely give password-cracking attackers a leg up. Academics are just beginning to study the impact of generative AI on password cracking, but it’s very likely that bad actors are doing the same. Initial studies use inputs like previously leaked password databases to generate typical passwords and mimic the patterns humans follow when crafting their own.
Future AI-enhanced tools, however, might draw on far more context: who the target is, where they live, what languages they speak, their interests or pop-culture influences. This context could sharply increase effectiveness. If the attacker can submit an unlimited number of guesses, it doesn’t matter if the AI is wrong 10,000 times; it only has to guess correctly once. Account compromises will climb rapidly for anyone who does not protect their accounts with strong, unique, random passwords, whether produced by a dedicated password generator or a password manager.
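That defense is easy to adopt. As a minimal sketch (an illustration, not the author’s tooling), Python’s standard secrets module produces exactly the kind of strong, unique, random password that context-aware guessing cannot touch: every character is drawn uniformly at random, so knowing a target’s language, interests or pop-culture tastes confers no advantage.

```python
# Minimal sketch of the recommended defense: a uniformly random password
# drawn with a cryptographically secure generator. Because no character
# depends on the user's life or habits, context-aware guessing models
# gain nothing over brute force.
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
LENGTH = 16

def generate_password(length: int = LENGTH) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
entropy_bits = LENGTH * math.log2(len(ALPHABET))  # ~105 bits for 16 characters
print(password)
print(f"Search space: {len(ALPHABET) ** LENGTH:.2e} "
      f"(~{entropy_bits:.0f} bits of entropy)")
# Even at a trillion guesses per second, exhausting ~3.7e31 possibilities
# would take on the order of a trillion years, so "the AI only has to be
# right once" stops being a meaningful threat.
```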
Privacy implications
The sharing of sensitive data is another security risk associated with generative AI tools. ChatGPT is available to anyone and everyone, including those with no understanding of cybersecurity or its best practices. Users reveal information they believe is kept private, not realizing that their inputs may be stored in a database and used to train future iterations of the AI.
Even if the company behind a generative AI solution does not use customer data for training, it may record conversations for quality control. That creates yet another copy of sensitive data, one that is exposed the moment an attacker gains access to those transcripts. Cybercriminals sell this kind of information on the dark web or use it to target their victims.
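One practical mitigation, shown here as an illustrative sketch rather than a prescribed control, is to scrub obvious identifiers from text before it ever leaves your machine. The regex patterns below are deliberately simple assumptions; real PII detection needs far broader coverage (names, addresses, national ID formats and more).

```python
# Minimal sketch: strip obvious identifiers from a prompt before it is sent
# to any third-party AI service. The patterns are illustrative assumptions,
# not a complete PII detector.
import re

REDACTIONS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-867-5309 about SSN 123-45-6789."
print(redact(prompt))
# Email [EMAIL REDACTED] or call [PHONE REDACTED] about SSN [SSN REDACTED].
```

Scrubbing at the boundary limits what can end up in a vendor’s training set or quality-control transcripts in the first place.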
Why regulation won’t work
Despite the risks of generative AI, since ChatGPT’s debut it’s been a race to innovate, with the biggest names in tech in the running.
After OpenAI released GPT-4 in March 2023, the Future of Life Institute circulated a petition calling for a six-month moratorium on large-scale AI experiments. The moratorium would give AI labs and independent experts time to develop safety protocols for advanced AI design, protocols that would make AI systems more accurate, safe, interpretable and trustworthy.
Is it possible to legislate the use and development of generative AI? No. Will the industry collaborate enough to create guardrails that can be universally followed? Possibly.
Generative AI has made regulating, censoring and legislating technology more complex than ever. Jurisdictional boundaries render moratoriums on research ineffective and can even exacerbate the problems they are meant to solve. For example, imagine what would happen if the U.S. mandated a temporary pause on generative AI research. Researchers outside the U.S. would not have to comply. Instead of protecting against the presumed dangerous pace of AI innovation, the moratorium would hand an unfair research and development (R&D) advantage to the nations and researchers outside U.S. jurisdiction.
Any powerful technology can be wielded for good or evil, and ChatGPT and its counterparts are no different. If it’s powerful, it has a dark side. But should this lead to restricting further R&D? No. Such thinking only restricts the very research needed to counter dark-side developments and deployments. Suppose a jurisdiction temporarily bans research on the large language models (LLMs) on which ChatGPT is built. Clever researchers could simply adjust their naming and report that they are not researching LLMs but instead working on Medium Lift Neural Network Weighted Modules, which are not currently regulated. Beyond that, any moratorium disproportionately favors those using AI for nefarious purposes. Remember: criminals don’t follow congressional regulations.
Summary
More than a year in, ChatGPT is a reminder that change is the only constant in our universe. Even as the world grapples with these questions around security risks and the rapid pace of innovation, we know that another mind-boggling, ethically challenging advancement is likely just around the corner.