April 25, 2024 By Josh Nadeau 4 min read

Last year, the United States Secretary of Commerce announced that the National Institute of Standards and Technology (NIST) would lead a new public working group on artificial intelligence (AI), building on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology.

However, recent budget cuts at NIST, along with a lack of strategy implementation, have called into question the agency’s ability to lead this critical effort. Ultimately, the success of this new government-mandated project will depend on NIST’s ability to overcome these unique challenges while relying on strong partnerships and new business funding initiatives.

The growing concern about AI-powered cyber threats

AI’s entry into the business world and our personal lives has been met with widespread optimism about its potential applications. Now, more and more organizations are adopting the technology to inject new levels of efficiency and automation into their operations.

However, this disruptive technology also has a much darker side, one that has grown more severe over the years. AI has become a core component of many modern cybersecurity threats, enabling highly adaptable and effective methods for orchestrating attacks.

The introduction of newer technologies like AI into cyber criminals’ arsenals has contributed to projections that cyberattack damages will reach $10.5 trillion annually by 2025. Much of this trend can be attributed to the sheer scale of AI-driven attacks now taking place.

Unlike traditional attack methods, which relied heavily on human intervention to plan and execute attacks, AI allows cyber criminals to operate at a far more autonomous and anonymous scale. This includes deploying automated vulnerability discovery software that accelerates the development of zero-day exploits and successful malware injections.

Deepfake threats are another product of AI technology that can have significant political and economic consequences. Manipulated audio and video recordings that are becoming more believable every day can lead to several security issues and can even play a role in steering political elections or compromising critical infrastructure.

These growing concerns have led governments to prioritize new initiatives focused on exerting more control over how AI technology is used and regulated.


NIST’s tall order and what it entails

NIST, originally founded in 1901 as the National Bureau of Standards, has operated for over a century within the U.S. Department of Commerce, with a mission to promote standards in science and technology that improve security and quality of life for everyone.

In an effort to continuously improve on these initiatives, the Biden administration announced in June of last year that NIST would focus its efforts on a new project building off of the NIST AI Risk Management Framework to help address and regulate the rapid growth of AI technology.

As part of the new project, NIST will extend its investigatory scope beyond cybersecurity to address the broader societal risks associated with the misuse of AI. This includes designing highly complex testing protocols to ensure the technology is used ethically and maintains the level of security needed to prevent misuse.

One of the main subjects NIST will focus on over the coming months is the rise of generative AI, given its fast adoption rate in business environments around the world. In support of this effort, NIST will work with other organizations to develop new standards and best practices for the responsible development and use of generative AI in commercial settings.

What challenges is NIST facing in the fight against AI-driven cyberattacks?

Although many view NIST as playing a critical role in ensuring better security practices across industries and sectors, the path ahead has not been an easy one. NIST currently faces major financial issues that impact its ability to see its mission through.

For several years now, the 123-year-old government building that houses NIST’s R&D operations has been in a state of disrepair, with rain leaks and mold becoming major issues. Budget constraints have been the main culprit, and newly proposed government plans would cut the agency’s funding by a further 10%.

Considering the ambitious plan instituted by the Biden administration, the future of this initiative looks like it could be in jeopardy without certain forms of intervention. Insufficient funding will restrict the scope and scale that NIST can undertake and may lead to delays when introducing new essential security tools and guidelines.

With AI technology gaining momentum, NIST is at a crossroads in finding the support it needs to keep pace with increasingly advanced security threats.

NIST looking forward

Addressing NIST’s funding issue is one of the most important challenges the organization is facing right now.

Increased financial support — either through federal funding initiatives or through public and private partnerships — can play a big part in opening the doors to bringing in more qualified talent to assist. This includes the ability to work with top security researchers and engineers who can help accelerate the discovery and mitigation of specific AI risks.

In addition to receiving funding, NIST would benefit greatly from collaborating with other industry leaders in the security and technology sectors. Organizations like Google and Amazon have been leaders in AI adoption and have funded their own security initiatives around its use.

While the long-term success of NIST’s new mission will no doubt depend on improving its current funding situation, we may start to see some significant improvements in how AI technology is safely used across various industries as more organizations recognize the importance of NIST’s work and lend their support.
