January 18, 2024 By Sue Poremba 3 min read

The hottest technology right now is AI — more specifically, generative AI. The trend is so popular that every conference and webinar speaker feels obligated to mention some form of AI, no matter their field.

The innovations and risks that AI offers are both exciting and frightening. However, the heavy focus on this technology overshadows an important component of artificial intelligence: machine learning (ML).

For a quick overview, ML is a subset of AI based on patterns, predictions and optimization. Cybersecurity tools use ML to spot patterns, detect anomalies and sniff out potential threats. Instead of a human spending hours reading logs, ML can do the same task in seconds.
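As a rough illustration of the idea (not any specific vendor's tooling), a minimal anomaly check over hypothetical per-minute login counts from a server log might look like this, flagging values far from the statistical norm:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Hypothetical per-minute login counts pulled from a log file.
logins_per_minute = [12, 14, 11, 13, 12, 15, 13, 11, 14, 12, 250, 13, 12]
print(flag_anomalies(logins_per_minute))  # [250] — the spike stands out
```

Real security tooling uses far richer models than a z-score, but the principle is the same: learn what "normal" looks like from historical data, then surface the outliers for a human to review.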

Like AI, ML has been around for a long time. We’re talking about AI so much now because generative AI is a game changer in the way we communicate with technology. But ML is also changing, and we’ll see it used in new ways in 2024.

How we use machine learning

Machine learning is all about data. ML algorithms rely on historical data to detect patterns, from software code to customer shopping behavior. Social media networks rely on ML algorithms to keep relevant information at the top of your feed. Self-driving cars use ML algorithms to navigate city streets and follow traffic laws. In cybersecurity, ML is used in areas like behavioral analytics, alerting on unusual usage, task automation and more efficient real-time threat-hunting intelligence.

There are three common types of ML currently in use. Supervised learning trains a model on labeled examples to perform a specific task. Unsupervised learning finds relationships and structure within unlabeled data. Reinforcement learning is most similar to human learning: the model learns to solve problems through trial and error.
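To make the supervised case concrete, here is a toy nearest-centroid classifier in Python. The features (bytes sent, connection count) and labels are invented for illustration; production systems would use more features and more capable models:

```python
def nearest_centroid_fit(points, labels):
    """Supervised learning in miniature: average the points for each label."""
    centroids = {}
    for label in set(labels):
        group = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(dim) / len(group) for dim in zip(*group))
    return centroids

def nearest_centroid_predict(centroids, point):
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], point))

# Hypothetical labeled sessions: (bytes sent in MB, connections opened).
points = [(1, 1), (2, 1), (1, 2), (8, 9), (9, 8), (9, 9)]
labels = ["normal", "normal", "normal", "suspicious", "suspicious", "suspicious"]

model = nearest_centroid_fit(points, labels)
print(nearest_centroid_predict(model, (8, 8)))  # suspicious
```

The key supervised-learning ingredient is the labels: a human (or an earlier system) has already told the model which sessions were normal and which were suspicious, and the model generalizes from those examples.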

New trends in machine learning

As AI continues to advance, so does ML, and one of the most anticipated improvements to ML in 2024 will be no-code machine learning. No-code ML relies heavily on behavioral data and plain English to get results. Instead of writing complicated code, analysts will be able to ask a question or issue a command to get a report. One of the biggest benefits of no-code ML is that it allows companies of all sizes to implement ML and AI in their networks without hiring data analysts and engineers. The downside is that this type of ML technology is limited and won’t be able to do deep-dive predictive analysis.

Unsupervised and reinforcement ML are both expected to expand in the coming year, in part because of no-code ML.

As ML evolves, we will likely see growth in other technologies like augmented reality and quantum computing. “Machine learning models can generate 3D objects for apps and other uses in augmented reality,” Luís Fernando Torres wrote. In addition, ML will play a role in improved facial recognition technology and interactions with generative AI.

Machine learning and security — the good and bad

As mentioned earlier, ML benefits your overall cybersecurity program by automating what were once cumbersome manual tasks. It can find threats that are otherwise missed and can cut down on false positives.

But as with any technology, there are security risks involved. Threat actors use ML and AI to launch attacks, poisoning or misleading the training data to trick a system into producing false reports. They then exploit those blind spots to bypass security systems and hijack the network.
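A minimal sketch of how data poisoning works, reusing the same invented centroid-style classifier idea from above: by slipping points that resemble their own traffic into the "benign" training set, an attacker drags the learned decision boundary until their activity is misclassified. The data here is entirely hypothetical.

```python
def centroid(points):
    """Average position of a set of 2-D points."""
    return tuple(sum(dim) / len(points) for dim in zip(*points))

def classify(point, benign_centroid, malicious_centroid):
    """Label a point by whichever centroid is closer (squared distance)."""
    d_benign = sum((a - b) ** 2 for a, b in zip(point, benign_centroid))
    d_malicious = sum((a - b) ** 2 for a, b in zip(point, malicious_centroid))
    return "malicious" if d_malicious < d_benign else "benign"

benign = [(1, 1), (2, 2), (1, 2)]
malicious = [(9, 9), (8, 9), (9, 8)]
sample = (7, 7)  # attacker traffic

# Clean training data: the attacker's traffic is correctly flagged.
print(classify(sample, centroid(benign), centroid(malicious)))  # malicious

# Poisoned training data: the attacker injects malicious-looking points
# labeled "benign", pulling the benign centroid toward their own traffic.
poisoned_benign = benign + [(8, 8), (7, 7), (8, 7), (7, 8), (8, 8), (7, 7)]
print(classify(sample, centroid(poisoned_benign), centroid(malicious)))  # benign
```

The takeaway for defenders: a model is only as trustworthy as its training data, so data provenance and integrity checks belong in any ML-backed security pipeline.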

AI and its role in security is what everyone wants to talk about today, but the ways that AI can improve your company’s security systems depend on machine learning. The time has come to get back to the basics and recognize how ML fits into your security system and how to best train ML so that your AI is even more effective.
