While mainstream generative AI models have built-in safety barriers, open-source alternatives have no such restrictions. Here’s what that means for cyber crime.

There’s little doubt that open-source is the future of software. According to the 2024 State of Open Source Report, over two-thirds of businesses increased their use of open-source software in the last year.

Generative AI is no exception. The number of developers contributing to open-source projects on GitHub and other platforms is soaring. Organizations are investing billions in generative AI across a vast range of use cases, from customer service chatbots to code generation. Many of them are either building proprietary AI models from the ground up or on the back of open-source projects.

But legitimate businesses aren’t the only ones investing in generative AI. It’s also a veritable goldmine for malicious actors, from rogue states bent on proliferating misinformation among their rivals to cyber criminals developing malicious code or targeted phishing scams.

Tearing down the guard rails

For now, one of the few things holding malicious actors back is the guardrails developers put in place to protect their AI models against misuse. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t create abusive images. However, these models belong to entirely closed-source ecosystems, where the developers behind them have the power to dictate what they can and cannot be used for.

It took just two months from its public release for ChatGPT to reach 100 million users. Since then, countless thousands of users have tried to break through its guardrails and ‘jailbreak’ it to do whatever they want — with varying degrees of success.

The unstoppable rise of open-source models will render these guardrails obsolete anyway. While performance has typically lagged behind that of closed-source models, there’s no doubt open-source models will improve. The reason is simple — developers can use whichever data they like to train them. On the positive side, this can promote transparency and competition while supporting the democratization of AI — instead of leaving it solely in the hands of big corporations and regulators.

However, without safeguards, generative AI is the next frontier in cyber crime. Rogue AIs like FraudGPT and WormGPT are widely available on dark web markets. Both are based on the open-source large language model (LLM) GPT-J developed by EleutherAI in 2021.

Malicious actors are also using open-source image synthesizers like Stable Diffusion to build specialized models capable of generating abusive content. AI-generated video content is just around the corner. Its capabilities are currently limited only by the availability of high-performance open-source models and the considerable computing power required to run them.

What does this mean for businesses?

It might be tempting to dismiss these issues as external threats that any sufficiently trained team should be adequately equipped to handle. But as more organizations invest in building proprietary generative AI models, they also risk expanding their internal attack surfaces.

One of the biggest sources of threat in model development is the training process itself. For example, if there’s any confidential, copyrighted or incorrect data in the training data set, it might resurface later on in response to a prompt. This could be due to an oversight on the part of the development team or due to a deliberate data poisoning attack by a malicious actor.

Prompt injection attacks are another source of risk, which involves tricking or jailbreaking a model into generating content that goes against the vendor’s terms of use. That’s a risk facing every generative AI model, but the risks are arguably greater in open-source environments lacking sufficient oversight. Once AI tools are open-sourced, the organizations they originate from lose control over the development and use of the technology.

The easiest way to understand the threats posed by unregulated AI is to ask the closed-source ones to misbehave. Under most circumstances, they’ll refuse to cooperate, but as numerous cases have demonstrated, all it typically takes is some creative prompting and trial and error. However, you won’t run into any such restrictions with open-source AI systems developed by organizations like Stability AI, EleutherAI or Hugging Face — or, for that matter, a proprietary system you’re building in-house.

A threat and a vital tool

Ultimately, the threat of open-source AI models lies in just how open they are to misuse. While advancing democratization in model development is itself a noble goal, the threat is only going to evolve and grow and businesses can’t expect to count on regulators to keep up. That’s why AI itself has also become a vital tool in the cybersecurity professional’s arsenal. To understand why, read our guide on AI cybersecurity.

More from Artificial Intelligence

Navigating the ethics of AI in cybersecurity

4 min read - Even if we’re not always consciously aware of it, artificial intelligence is now all around us. We’re already used to personalized recommendation systems in e-commerce, customer service chatbots powered by conversational AI and a whole lot more. In the realm of information security, we’ve already been relying on AI-powered spam filters for years to protect us from malicious emails.Those are all well-established use cases. However, since the meteoric rise of generative AI in the last few years, machines have become…

Risk, reward and reality: Has enterprise perception of the public cloud changed?

4 min read - Public clouds now form the bulk of enterprise IT environments. According to 2024 Statista data, 73% of enterprises use a hybrid cloud model, 14% use multiple public clouds and 10% use a single public cloud solution. Multiple and single private clouds make up the remaining 3%.With enterprises historically reticent to adopt public clouds, adoption data seems to indicate a shift in perception. Perhaps enterprise efforts have finally moved away from reducing risk to prioritizing the potential rewards of public cloud…

Is AI saving jobs… or taking them?

4 min read - Artificial intelligence (AI) is coming to take your cybersecurity job. Or, AI will save your job. Well, which is it? As with all things security-related, AI-related and employment-related, it's complicated. How AI creates jobs A major reason it's complicated is that AI is helping to increase the demand for cybersecurity professionals in two broad ways. First, malicious actors use AI to get past security defenses and raise the overall risk of data breaches. The bad guys can increasingly use AI-based…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today