
Cybersecurity: Evolution of the AI Threat — 3 Stages to Watch in 2024

In this episode of The SecurityANGLE, our weekly webcast focused on all things security, the conversation centers on cybersecurity, the evolution of the AI threat, and three stages we think are important to be aware of and account for in 2024. I’m joined by my co-host Jo Peterson, a fellow analyst and member of the Cube Collective community. This is the first in a short series of conversations about security trends, the AI threat, and what we see ahead.

So, AI and the evolution of AI: who isn’t thinking about, and who isn’t talking about, this these days? In the cybersecurity space, there’s much attention, and rightfully so, on the dangers AI poses and how to combat them. We’ve spent some time over the last couple of weeks talking about the AI threat and what we think people need to be thinking about for 2024 from a cybersecurity standpoint, and we’re excited to bring this multi-part series to you.

Today, we talk about three stages of the evolution of the AI threat. To set the stage here a little bit, it is not hyperbole in any way to say that AI changes everything, and it is going to continue to change everything. I think that what we’re experiencing now, and what we’ve got ahead, is as much of a revolution as it is an evolution. We expect the AI revolution to have as much of an impact on society, both from a personal and a business standpoint, as the evolution of the internet.

When you think about it, for those of us who’ve been around a while, the internet changed everything and we watched it happen. Many of us can characterize our lives in pre-internet days and post-internet days, and our reality is that the internet really did change everything. AI is going to revolutionize the world in a similar and likely much bigger way. That’s why we think it’s critically important to approach it from a cybersecurity standpoint: the AI threat is something to understand, to plan for, and to execute against. We have identified three stages in which we think that threat will unfold, starting with human threat actors increasingly augmented by AI capabilities.

Three Stages of the Unfolding Cybersecurity AI Threat to Watch in 2024

We have identified three stages in which we believe we’ll see the AI threat unfolding. The first is that we’re seeing human threat actors take the stage, increasingly augmented by AI capabilities. These capabilities will act as a force multiplier, rapidly extending the reach and technical capabilities attackers can wield. Playing a role here are Weak AI (also known as Narrow AI) and Strong AI (also known as Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI)), which we explore more fully in the discussion.

The second part of our conversation on the AI threat centered on the fact that AI has become part of the cybercriminal’s toolbox, and whether it’s being used to amplify existing threat vectors or to create new attack vectors based on the quality of generative AI output, threat actors aren’t waiting around to take advantage of the opportunities AI has created. For some perspective here, according to Harvard Business Review, cybercrime cost businesses more than $10 billion in the U.S. alone last year. In the 2022 Official Cybercrime Report, produced in partnership with eSentire, Cybersecurity Ventures predicted ransomware will cost its victims some $265 billion by 2031, an astonishing 815 times the $325 million ransomware cost its victims in 2015. Sixteen years, an 815-fold increase. Here are some interesting data points from the Cybersecurity Ventures report on the predicted cost of global cybercrime in 2023:

AI Threat - Data from eSentire/Cybersecurity Ventures on global cybercrime

Image credit: eSentire/Cybersecurity Ventures

Cybersecurity Vendors Working to Combat AI Threat

For cybersecurity vendors, integrating artificial intelligence into their solutions isn’t in any way new — they’ve been doing it for years, and those efforts have largely paid off, for them and for their customers. In this part of the show, we touched on some cybersecurity vendors whose AI-powered technology we’ve been watching, who have made some recent significant updates or inroads in AI threat protection, and who we feel are doing some impressive work, including:

Darktrace. Darktrace’s self-learning AI tech was rolled out at the end of October 2023 and is billed as “cyber security that learns you.” It is designed to understand and adapt to the unique patterns of a company’s network by learning from users and devices, as well as the connections between them, which enables proactive detection of anomalous, novel behavior that may indicate a cyberattack. It also provides real-time visibility into cloud architectures across an organization and delivers prioritized recommendations and specific actions intended to help cybersecurity teams better manage, strengthen, and secure operations and compliance. (A stripped-down sketch of this “learn the baseline, flag the deviation” approach follows the vendor list below.)

CrowdStrike. CrowdStrike Falcon Go is an AI-native security solution that leverages machine learning, behavioral AI, and a custom large language model to protect endpoints, detect threats, and respond to security incidents in real time. It consolidates next-gen antivirus protection, endpoint detection and response (EDR), and a 24/7 managed hunting service into a single lightweight agent, providing organizations with complete visibility and protection across their endpoints. Equally impressive are CrowdStrike’s partnership with Dell (announced mid-December 2023) and Falcon Go’s availability on Amazon Business (announced in late November), both designed specifically with SMBs in mind and intended to provide them with the same level of protection afforded larger organizations.

Zscaler. Zscaler is a cloud-based security platform that provides internet security and web filtering services to some 40% of Fortune 100 companies. Zscaler’s proprietary large language models and AI continuously enhance and improve its security offerings and deliver on the company’s goal of simplification with zero trust for cybersecurity pros. The company’s LLMs are integrated with a massive data lake that handles more than 300 billion daily transactions, allowing for continuous learning and improvement of its AI models. For perspective, Google handles approximately eight or nine billion searches per day, which gives you some idea of the scale at which Zscaler operates as users access AWS, GCP, Azure, SaaS apps, and the internet in general. Zscaler secures workloads and provides advanced AI-driven outcomes and capabilities for IT and security teams. Its fast, simple, reliable zero-trust connectivity to apps from anywhere, provision of privileged access to OT, and zero-trust connectivity for IoT and OT devices make the AI-powered Zscaler an attractive solution all the way around.

SentinelOne. SentinelOne was in the news this past week announcing its acquisition of PingSafe, a cloud-native application protection platform startup. SentinelOne is all about cloud identity, data security, and workload protection, and bills itself as “The first AI security platform to protect the entire enterprise.” At RSA last April, SentinelOne announced its threat-hunting platform, which features AI-powered monitoring and operation of all security data. The platform also gives security pros the ability to use AI to ask threat-related and adversary-hunting questions and get correlated results and insights from across the entire ecosystem.

The addition of PingSafe to the SentinelOne family of solutions is significant, as PingSafe’s cloud-native application protection platform (CNAPP) solution delivers dynamic, real-time monitoring of multi-cloud workloads and aggregates intelligence that allows for the detection of toxic and exploitable vulnerabilities. It should nicely complement SentinelOne’s family of solutions, providing an integrated platform that affords protection across an organization’s entire cloud footprint while helping reduce risk and operational costs, get more value out of data retention spend, and increase efficiencies along the way.

Tanium. Tanium is, quite simply, on fire these days. I was working with some ETR data from the 2024 survey results this afternoon and was blown away by the inroads Tanium has made over the course of the last quarter. I can’t share the data as it’s not yet public, but I’ll augment this statement later. The company’s autonomous endpoint management (XEM) platform, powered by Tanium’s proprietary AI and launched in November 2023, delivers real-time visibility across all endpoints, managed or unmanaged, on-prem or in the cloud, all with a goal of driving faster risk mitigation and increased operational efficiency.

These cybersecurity vendors are definitely on our ‘ones to watch’ list. Who have we overlooked? We want to hear from you.
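To make the self-learning idea running through several of these products a little more concrete, here is a minimal, purely illustrative Python sketch of the general technique: learn a per-device baseline of normal behavior, then flag activity that deviates sharply from it. The device names, metric, and thresholds are assumptions for illustration only, not any vendor’s actual implementation.

```python
from collections import defaultdict
from statistics import mean, pstdev

class BehaviorBaseline:
    """Toy per-device baseline: learn typical outbound bytes per hour,
    then flag observations far outside the learned range."""

    def __init__(self, min_samples=24, z_threshold=4.0):
        self.history = defaultdict(list)   # device_id -> observed values
        self.min_samples = min_samples     # observations needed before alerting
        self.z_threshold = z_threshold     # deviation (in std devs) that counts as anomalous

    def observe(self, device_id, outbound_bytes):
        """Record an observation; return True if it deviates sharply from the baseline."""
        history = self.history[device_id]
        anomalous = False
        if len(history) >= self.min_samples:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(outbound_bytes - mu) / sigma > self.z_threshold:
                anomalous = True
        history.append(outbound_bytes)     # keep learning, even from flagged events
        return anomalous

# A workstation that normally moves ~5 MB/hour suddenly pushes out 500 MB
baseline = BehaviorBaseline()
for hour in range(48):
    baseline.observe("workstation-42", 5_000_000 + (hour % 7) * 10_000)
print(baseline.observe("workstation-42", 500_000_000))  # True -> raise an alert
```

Production systems obviously model far richer features (protocols, peer groups, timing) and use more sophisticated statistics or machine learning, but the “baseline, then deviation” pattern is the common thread.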

New Cybersecurity AI Threat Vectors Emerge

Our conversation in this episode also covered the myriad ways in which AI is enhancing existing attack vectors, phishing, smishing, and vishing among them, as well as helping facilitate the creation of new attack vectors. These AI threats include AI impersonation technology and AI algorithms that can analyze large amounts of data, such as social media profiles, to create highly convincing fake profiles. There are some very real dangers here, and we shared a number of examples. With employees being the last line of defense against threat actors, the use of AI only makes cybersecurity awareness training, drills, and technology that can proactively identify and mitigate problem areas even more critically important.
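As a deliberately simplified illustration of the kind of proactive screening mentioned above, the sketch below scores an inbound email against a few classic phishing indicators: urgency language, a reply-to address that doesn’t match the sender, and lookalike domains. The indicator list, scoring, and domain names are assumptions for illustration; real products layer in far richer signals and machine learning models.

```python
URGENCY_PHRASES = ("act now", "urgent", "verify your account", "password expires")
TRUSTED_DOMAINS = {"example.com"}               # hypothetical allow-list for the demo
SUBSTITUTIONS = str.maketrans("10435", "loaes")  # common swaps: 1->l, 0->o, 4->a, 3->e, 5->s

def lookalike(domain: str, trusted: set) -> bool:
    """Rough lookalike check: undo common digit-for-letter swaps and compare."""
    normalized = domain.lower().translate(SUBSTITUTIONS)
    return domain.lower() not in trusted and normalized in trusted

def phishing_score(sender: str, reply_to: str, subject: str, body: str) -> int:
    """Additive risk score; higher means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()
    score += sum(2 for phrase in URGENCY_PHRASES if phrase in text)
    sender_domain = sender.split("@")[-1].lower()
    reply_domain = reply_to.split("@")[-1].lower()
    if sender_domain != reply_domain:
        score += 3                               # mismatched reply-to is a strong signal
    if lookalike(sender_domain, TRUSTED_DOMAINS):
        score += 5                               # e.g. examp1e.com impersonating example.com
    return score

# A message "from" a lookalike domain, urging immediate action, replying elsewhere
print(phishing_score(
    sender="it-help@examp1e.com",
    reply_to="attacker@freemail.test",
    subject="URGENT: verify your account",
    body="Your password expires today. Act now.",
))  # prints a high score -> route to quarantine / security review
```

The point is not that a few heuristics solve the problem, but that layered, automated checks like these back up the awareness training mentioned above rather than replace it.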

AI and the Law

We touched on the International Association of Privacy Professionals’ year in review for AI; the FTC’s notification that existing laws, such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, apply to AI systems; and President Biden’s recently issued Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which recognizes the benefits of AI but endeavors to mitigate its risks.

AI Code Assistants Introduce Further Code Vulnerability

As we wrapped the show, we touched on the increased adoption of AI coding assistants and the role they might play in software development, including instances where vulnerabilities might intentionally be written into source code. Jo shared some interesting stats from researchers at Stanford, who found that participants who used an AI assistant wrote less secure code than those who did not, while believing that the code they wrote was actually more secure. Oops.
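To make that finding tangible, here is a hedged, hypothetical example of the pattern the researchers describe: assistant-style suggestions that “work” but are insecure (SQL built via string interpolation, passwords hashed with unsalted MD5), alongside the safer equivalents a careful review would insist on. Neither snippet comes from the study itself.

```python
import hashlib
import secrets
import sqlite3

# --- The kind of code an AI assistant might happily suggest (insecure) ---------
def find_user_insecure(conn: sqlite3.Connection, username: str):
    # String interpolation invites SQL injection: username = "x' OR '1'='1" dumps the table
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def hash_password_insecure(password: str) -> str:
    # MD5 is fast and unsalted: trivially cracked with rainbow tables and GPUs
    return hashlib.md5(password.encode()).hexdigest()

# --- The safer equivalents a security review would require ---------------------
def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so no injection
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

def hash_password(password: str) -> str:
    # Salted, deliberately slow key-derivation function (PBKDF2 from the stdlib)
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt.hex() + ":" + digest.hex()
```

Both versions pass a quick functional test, which is exactly why the false sense of security the study describes is so dangerous.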

You can watch the full episode here:

Or stream it wherever you stream your podcasts. While you’re there, be sure to hit the “subscribe” button, and we’ll see you next time.

Note that in this series, you can expect interesting, insightful, and timely discussions, including cybersecurity news, security management strategies, security technology, and coverage of what major vendors in the space are doing on the cybersecurity solutions front. And if you’ve got something you’d like us to cover, don’t be shy: hit us up. We’d love to hear from you. You can find us here:

Shelly Kramer on LinkedIn

Jo Peterson on LinkedIn

Shelly Kramer on Twitter

Jo Peterson on Twitter

If you’re heading to CES this week, chances are good you’ll run into my colleagues Rob Strechay and Savannah Peterson. Check out their pre-CES coverage, featuring insights and predictions along with a glimpse into topics like semiconductors and AI that are sure to dominate the event: CES is Back, and with an AI Bang.
