June 14, 2024 By Jonathan Reed 4 min read

How many companies intentionally refuse to use AI to get their work done faster and more efficiently? Probably none: the advantages of AI are too great to deny.

The benefits AI models offer organizations are substantial, especially for optimizing critical operations and outputs. However, generative AI also comes with risk. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years.

CISA Director Jen Easterly said, “We don’t have a cyber problem, we have a technology and culture problem. Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And nowhere in technology is that obsession with speed to market more apparent than in generative AI.

AI training sets ingest massive amounts of valuable and sensitive data, which makes AI models a juicy attack target. Organizations cannot afford to bring unsecured AI into their environments, but they can’t do without the technology either.

To bridge the gap between the need for AI and its inherent risks, it’s imperative to establish a solid framework to direct AI security and model use. To help meet this need, IBM recently announced its Framework for Securing Generative AI. Let’s see how a well-developed framework can help you establish solid AI cybersecurity.

Securing the AI pipeline

A generative AI framework should be designed to help customers, partners and organizations understand the most likely attacks on AI. From there, defensive approaches can be prioritized to quickly secure generative AI initiatives.

Securing the AI pipeline involves five areas of action:

  1. Securing the data: How data is collected and handled
  2. Securing the model: AI model development and training
  3. Securing the usage: AI model inference and live use
  4. Securing AI model infrastructure
  5. Establishing sound AI governance

Now, let’s see how each area is oriented to address AI security threats.


1. Secure the AI data

Hungry AI models consume massive amounts of data, which data scientists, engineers and developers will access for development purposes. However, developers might not have security high on their list of priorities. If mishandled, your sensitive data and critical intellectual property (IP) could end up exposed.

In AI model attacks, exfiltration of underlying data sets is likely to be one of the most common attack scenarios. Therefore, security fundamentals are the first line of defense to protect these data sets. AI security fundamentals include:

  • Discovering and classifying sensitive data before it enters training sets
  • Encrypting data at rest and in transit
  • Enforcing least-privilege access controls for the data scientists, engineers and developers who handle training data
  • Monitoring data stores for anomalous access
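As an illustration of the data discovery and classification fundamental, here is a minimal sketch of scanning training records for obvious PII before they enter a training set. The patterns and function names are hypothetical; a production pipeline would rely on a dedicated data-classification tool with far broader coverage.

```python
import re

# Illustrative patterns for two common PII types; real classification
# tools cover many more types with far higher accuracy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(records):
    """Return (record_index, pii_type) pairs for records containing likely PII."""
    findings = []
    for i, text in enumerate(records):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, label))
    return findings

sample = [
    "Customer praised the new dashboard.",
    "Contact jane.doe@example.com for a refund.",
    "SSN on file: 123-45-6789",
]
print(flag_pii(sample))  # [(1, 'email'), (2, 'ssn')]
```

Flagged records can then be redacted or excluded before model training ever begins.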

2. Secure the AI model

When developing AI applications, data scientists frequently use pre-existing, freely available machine learning (ML) models sourced from online repositories. However, as with any open-source component, security is frequently not built in.

Every organization must weigh the benefits of accelerated model development against the AI security risks. Without proper AI model security, the downside can be significant. Remember, hackers have access to online repositories as well, and backdoors or malware can be injected into open-source models. Any organization that downloads an infected model is wide open to attack.

Furthermore, API-enabled large language models (LLMs) present a similar risk. Hackers can target API interfaces to access and exploit data being transported across the APIs. And LLM agents or plug-ins with excessive permissions further increase the risk for compromise.

To secure AI models, organizations should:

  • Scan pre-trained models from public repositories for backdoors and malware before use
  • Verify the integrity and provenance of downloaded models
  • Harden and authenticate the API interfaces that expose LLMs
  • Restrict LLM agents and plug-ins to least-privilege permissions
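One concrete defense against tampered downloads is integrity verification. The sketch below (function names are illustrative) refuses to load a model file whose SHA-256 digest does not match the checksum published by the model's source, assuming such a checksum is available.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_sha256):
    """Raise instead of loading a downloaded model whose hash does not
    match the publisher's checksum."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"Model {path} failed integrity check: {actual}")
    return path
```

A hash check only proves the file matches what the publisher released; it does not prove the published model itself is benign, so it complements rather than replaces malware scanning.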

3. Secure the AI usage

When AI models first became widely available, waves of users rushed to test the platforms. It wasn’t long before hackers were able to trick the models into ignoring guardrails and generating biased, false or even dangerous responses. All of this can lead to reputational damage and increase the risk of costly legal headaches.

Attackers can also attempt to analyze input/output pairs and train a surrogate model to mimic the behavior of your organization’s AI model. This means the enterprise can lose its competitive edge. Finally, AI models are also vulnerable to denial of service attacks, where attackers overwhelm the LLM with inputs that degrade the quality of service and ramp up resource use.

Best practices for AI model usage security include:

  • Monitoring for prompt injections
  • Monitoring for outputs containing sensitive data or inappropriate content
  • Detecting and responding to data poisoning, model evasion and model extraction
  • Deploying machine learning detection and response (MLDR), which can be integrated into security operations solutions such as IBM Security® QRadar®, enabling teams to deny access and quarantine or disconnect compromised models
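To make the monitoring ideas above concrete, here is a toy pre-inference gate that pairs a keyword heuristic for prompt injection with per-client rate limiting, a cheap brake on model-extraction and denial-of-service attempts. All names and thresholds are illustrative; real deployments use trained classifiers and dedicated MLDR tooling rather than keyword lists.

```python
import time
from collections import defaultdict, deque

# Illustrative deny-phrases; production systems use classifiers, not keywords.
INJECTION_MARKERS = ("ignore previous instructions", "disregard your rules")

class InferenceGate:
    """Toy pre-inference gate: flags likely prompt injections and
    throttles clients that hammer the model."""

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> recent request times

    def allow(self, client_id, prompt, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client_id]
        # Drop request timestamps that fell out of the rate-limit window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False, "rate_limited"
        if any(marker in prompt.lower() for marker in INJECTION_MARKERS):
            return False, "possible_injection"
        q.append(now)
        return True, "ok"
```

Denied prompts and throttled clients can then be forwarded to the security operations pipeline for review, in the spirit of the MLDR integration described above.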

4. Secure the infrastructure

A secure infrastructure must underpin any solid AI cybersecurity strategy. Strengthening network security, refining access control, implementing robust data encryption and deploying vigilant intrusion detection and prevention systems around AI environments are all critical for securing infrastructure that supports AI. Additionally, allocating resources towards innovative security solutions tailored for safeguarding AI assets should be a priority.

5. Establish AI governance

Artificial intelligence governance entails the guardrails that ensure AI tools and systems are and remain safe and ethical. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.

IBM is an industry leader in AI governance, as its Framework for Securing Generative AI demonstrates. As organizations give AI more business-process and decision-making responsibility, AI model behavior must be kept in check, with monitoring for fairness, bias and drift over time. Whether the divergence is induced by an attacker or emerges on its own, a model that strays from what it was originally designed to do can introduce significant risk.
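Drift monitoring can start very simply. The sketch below (names and threshold are illustrative) flags a model for review when the mean of its recent output scores shifts by more than a set number of baseline standard deviations; real governance tooling tracks far richer signals than a single mean.

```python
from statistics import mean, pstdev

def drift_score(baseline, current):
    """Standardized shift in mean model output between a baseline window
    and a current window -- a crude, illustrative drift signal."""
    sigma = pstdev(baseline)
    if sigma == 0:
        return 0.0 if mean(current) == mean(baseline) else float("inf")
    return abs(mean(current) - mean(baseline)) / sigma

def drifted(baseline, current, threshold=2.0):
    """Flag for review when outputs shift more than `threshold` baseline
    standard deviations."""
    return drift_score(baseline, current) > threshold

baseline = [0.48, 0.52, 0.50, 0.47, 0.53]
stable   = [0.49, 0.51, 0.50, 0.48, 0.52]
shifted  = [0.70, 0.74, 0.72, 0.69, 0.73]
print(drifted(baseline, stable), drifted(baseline, shifted))  # False True
```

Checks like this run on a schedule, and a flagged model goes to human review rather than being disabled automatically, keeping governance decisions accountable.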
