May 23, 2024 By Mark Stone 4 min read

For the enterprise, there’s no escape from deploying AI in some form. Careers focused on AI are proliferating, but one you may not be familiar with is AI security researcher. These AI specialists are cybersecurity professionals who focus on the unique vulnerabilities and threats that arise from the use of AI and machine learning (ML) systems. Their responsibilities vary, but key tasks include identifying and analyzing potential security flaws in AI models, and developing and testing the methods malicious actors could use to manipulate or deceive AI systems.

In this exclusive Q&A, we spoke with Madhu Shashanka, cofounder of Concentric AI, about his background and experience.

Did you go to college? What did you go to school for? If not, what certifications did you obtain?

I completed a bachelor’s degree in computer science from BITS Pilani, India, and then got a Ph.D. in computational neuroscience from Boston University.

What was your first role in IT? If it wasn’t in security, what pushed you to pursue security?

My first job was with the candy and pet food company Mars, doing analytics. I joined a team called “Catalyst,” which functioned as an internal think tank. There were members from a variety of disciplines and backgrounds looking at innovation and areas of strategic interest to the company in the medium-to-long term. I was doing an internship at Mitsubishi’s North American research labs as part of my Ph.D. thesis work and was introduced to members of the Catalyst team who had come down to explore some new innovations from Mitsubishi.

After Mars, I followed my manager to the corporate research labs of United Technologies (now Raytheon) and had a wonderful experience working with a variety of business units across several application domains (aerospace, HVAC, material science, etc.).

I then moved to a Bay Area dating startup to try my hand at entrepreneurship, and that’s where I started my security journey. Undesirable behavior and security issues were becoming a real challenge at the rapidly growing dating site, and we had to solve several interesting problems. I then joined an early-stage user behavior analysis startup as a cofounder, and except for a brief detour at Charles Schwab building an AI and ML Center of Excellence, I have been doing security since.

How does an AI security researcher differ from other AI research roles?

Let me take the term “AI security researcher” to mean the broad category of people trying to apply AI towards solving cybersecurity problems.

Applied AI researchers typically gain expertise and experience working with the particular kinds of datasets characteristic of the domains they work in. As they come to understand a domain better over time, they develop sharper intuition for the kinds of data they deal with. As an example, researchers working with video data have developed specialized skills and tools that are very effective for working with video. Another example is online retail, where people have come up with tools and techniques uniquely suited to analyzing online transaction datasets.

But cybersecurity is more than just an application domain. It is not just about working with malware data or analyzing network logs. It is a critical enterprise function that has to do whatever needs to be done to keep the enterprise’s digital assets secure. I have written in the past about how cybersecurity can be particularly humbling for AI experts new to the field.

A couple of factors that I think play an especially important role in cybersecurity applications:

Focus on risk: Doing AI in cybersecurity is not about building the best AI model or finding the best algorithm to solve a particular problem. The eventual goal is to minimize risk, and one has to consider every piece of the puzzle that can affect it. Instead of building a tool in isolation, one has to consider how to get it operationalized, which people and teams will be involved, whether the right training and onboarding processes are in place, and so on. It requires the ability to understand and weigh system-level implications while staying focused on the task at hand. Systems-level thinking and an engineering mindset that prioritizes iterative delivery with a clear understanding of tradeoffs can go a long way toward being effective in this field.

First-principles thinking: The kinds of problems you’ll face and the nature of datasets you’ll work with in cybersecurity can be extremely vast and varied. You are more likely to fail if you are looking for proverbial nails to hit with your favorite AI hammer. It is very important to keep a beginner’s mindset and approach problems from the ground up with first-principles thinking. There is a very long history of AI experts who found success in other domains, confidently claiming whatever they had done before would transfer successfully into cybersecurity, only to be humbled.

What is the most valuable skill you learned in all of your different roles?

This is a hard question to answer. I would say I have been fortunate enough to have had the opportunity to work across a variety of application domains in my career. And I work in AI, a field that has changed, evolved and transformed tremendously over the time I have been working in [the] industry. This has forced me to constantly learn new things and keep abreast of the skills, tools and knowledge I need to do my job well. I think this skill of “learning how to learn” is very valuable.

Are there any soft skills that make a person successful as an AI security researcher or data scientist?

Not sure if this qualifies as a soft skill, but I’d encourage newcomers and early-career professionals to seek out opportunities to work with lots of different people and teams. Cybersecurity is eventually all about people — the employees you are trying to protect, the security teams you are trying to help or the malicious actors you are trying to defend against. In addition to technical work, you’ll be a lot more successful and effective if you work with and understand people from diverse backgrounds, teams and roles than if you spend all your time at a computer writing code or developing tools.

Learn more about AI cybersecurity
