Generative AI

The growing dangers of unregulated generative AI


While mainstream generative AI models have built-in safety barriers, open-source alternatives have no such restrictions. Here’s what that means for cyber crime.

There’s little doubt that open-source is the future of software. According to the 2024 State of Open Source Report, over two-thirds of businesses increased their use of open-source software in the last year.

Generative AI is no exception. The number of developers contributing to open-source projects on GitHub and other platforms is soaring. Organizations are investing billions in generative AI across a vast range of use cases, from customer service chatbots to code generation. Many of them are either building proprietary AI models from the ground up or on the back of open-source projects.

But legitimate businesses aren’t the only ones investing in generative AI. It’s also a veritable goldmine for malicious actors, from rogue states bent on proliferating misinformation among their rivals to cyber criminals developing malicious code or targeted phishing scams.

Tearing down the guard rails

For now, one of the few things holding malicious actors back is the guardrails developers put in place to protect their AI models against misuse. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t create abusive images. However, these models belong to entirely closed-source ecosystems, where the developers behind them have the power to dictate what they can and cannot be used for.

It took just two months from its public release for ChatGPT to reach 100 million users. Since then, countless thousands of users have tried to break through its guardrails and ‘jailbreak’ it to do whatever they want — with varying degrees of success.

The unstoppable rise of open-source models will render these guardrails obsolete anyway. While performance has typically lagged behind that of closed-source models, there’s no doubt open-source models will improve. The reason is simple — developers can use whichever data they like to train them. On the positive side, this can promote transparency and competition while supporting the democratization of AI — instead of leaving it solely in the hands of big corporations and regulators.

However, without safeguards, generative AI is the next frontier in cyber crime. Rogue AIs like FraudGPT and WormGPT are widely available on dark web markets. Both are based on the open-source large language model (LLM) GPT-J developed by EleutherAI in 2021.

Malicious actors are also using open-source image synthesizers like Stable Diffusion to build specialized models capable of generating abusive content. AI-generated video content is just around the corner. Its capabilities are currently limited only by the availability of high-performance open-source models and the considerable computing power required to run them.

What does this mean for businesses?

It might be tempting to dismiss these issues as external threats that any sufficiently trained team should be adequately equipped to handle. But as more organizations invest in building proprietary generative AI models, they also risk expanding their internal attack surfaces.

One of the biggest sources of threat in model development is the training process itself. For example, if there’s any confidential, copyrighted or incorrect data in the training data set, it might resurface later on in response to a prompt. This could be due to an oversight on the part of the development team or due to a deliberate data poisoning attack by a malicious actor.

Prompt injection attacks are another source of risk, which involves tricking or jailbreaking a model into generating content that goes against the vendor’s terms of use. That’s a risk facing every generative AI model, but the risks are arguably greater in open-source environments lacking sufficient oversight. Once AI tools are open-sourced, the organizations they originate from lose control over the development and use of the technology.

The easiest way to understand the threats posed by unregulated AI is to ask the closed-source ones to misbehave. Under most circumstances, they’ll refuse to cooperate, but as numerous cases have demonstrated, all it typically takes is some creative prompting and trial and error. However, you won’t run into any such restrictions with open-source AI systems developed by organizations like Stability AI, EleutherAI or Hugging Face — or, for that matter, a proprietary system you’re building in-house.

A threat and a vital tool

Ultimately, the threat of open-source AI models lies in just how open they are to misuse. While advancing democratization in model development is itself a noble goal, the threat is only going to evolve and grow and businesses can’t expect to count on regulators to keep up. That’s why AI itself has also become a vital tool in the cybersecurity professional’s arsenal. To understand why, read our guide on AI cybersecurity.



Source

Related Articles

Back to top button