
RSA Conference 2024 — AI Meets Cybersecurity, For Better and Worse

·923 words·5 mins
Osmond van Hemert
Cybersecurity Landscape - This article is part of a series.

RSA Conference 2024 is underway in San Francisco this week, and if you thought last year’s event was AI-heavy, this year makes that look restrained. Walking through the expo floor (virtually — I’m following along from the Netherlands), it seems like every vendor has bolted “AI-powered” onto their product descriptions. But beneath the marketing noise, there are genuine shifts happening in how we think about security in an AI-saturated world.

The Two Sides of AI in Security

The conversation at RSAC this year breaks neatly into two tracks: using AI to defend, and defending against AI. Both are maturing rapidly, but at very different rates.

On the defensive side, AI-powered threat detection has moved well past the “anomaly detection” buzzword phase. Companies like CrowdStrike and Palo Alto Networks are demonstrating systems that can correlate signals across endpoints, network traffic, and cloud workloads in ways that would take human analysts hours or days. The CrowdStrike Charlotte AI assistant, for instance, lets security analysts query their threat data in natural language — think “show me all lateral movement attempts in the last 48 hours involving service accounts” — and get actionable results.
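To make the idea concrete, here is a minimal sketch of what the structured equivalent of that natural-language query might look like once a translation layer has parsed it. This is not Charlotte AI's actual API or data model — the event records, field names, and the `svc-` naming heuristic are all hypothetical, invented purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical event records -- not any vendor's real telemetry schema.
EVENTS = [
    {"type": "lateral_movement", "account": "svc-backup",
     "timestamp": datetime.now(timezone.utc) - timedelta(hours=3)},
    {"type": "lateral_movement", "account": "alice",
     "timestamp": datetime.now(timezone.utc) - timedelta(hours=5)},
    {"type": "login", "account": "svc-backup",
     "timestamp": datetime.now(timezone.utc) - timedelta(hours=1)},
    {"type": "lateral_movement", "account": "svc-deploy",
     "timestamp": datetime.now(timezone.utc) - timedelta(days=4)},
]

def lateral_movement_by_service_accounts(events, window_hours=48):
    """Structured form of the query quoted above: lateral movement
    in the last 48 hours involving service accounts."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    return [
        e for e in events
        if e["type"] == "lateral_movement"
        and e["account"].startswith("svc-")   # naive service-account heuristic
        and e["timestamp"] >= cutoff
    ]

hits = lateral_movement_by_service_accounts(EVENTS)
for e in hits:
    print(e["account"], e["type"])
```

The value of the LLM layer is precisely that an analyst never has to write the filter above by hand — but something like it is what the model emits under the hood.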

Having spent years dealing with SIEM alert fatigue, I can tell you this kind of capability is genuinely transformative. The bottleneck in security operations has never been data collection; it’s been making sense of the data fast enough to act on it. Large language models are remarkably good at this translation layer between raw telemetry and human decision-making.

On the offensive side, the picture is more sobering. AI-generated phishing emails are already measurably more effective than traditional ones. Deepfake audio is being used in business email compromise attacks — or rather, business voice compromise attacks. And the barrier to entry for creating sophisticated attack tools continues to drop. A talk at this year’s conference demonstrated how an attacker with moderate skills could use publicly available LLMs to generate polymorphic malware that evades traditional signature-based detection.

The Software Supply Chain Keeps Everyone Up at Night

If there’s one topic that rivals AI for floor time at RSAC 2024, it’s software supply chain security. The echoes of SolarWinds, Log4j, and the recent xz Utils backdoor discovery are still reverberating through the industry.

The xz Utils incident, which came to light just a few weeks ago, is particularly chilling because it wasn’t a vulnerability in the traditional sense — it was a deliberate, patient, multi-year social engineering campaign to compromise a critical open-source library maintainer and insert a backdoor. It’s the kind of attack that makes you question every dependency in your stack.

Several RSAC sessions are focused on practical responses: improving SBOM (Software Bill of Materials) tooling, implementing more rigorous code signing practices, and establishing better processes for vetting open-source contributors. CISA’s continued push for Secure by Design principles is getting strong representation, and there’s growing momentum around making software manufacturers accountable for the security of their products.
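The core of most SBOM tooling reduces to a simple operation: cross-check every component in your bill of materials against an advisory feed. A toy sketch, using a hand-written CycloneDX-style fragment and a made-up advisory set (real pipelines would pull both from tools like syft or OSV, not inline literals):

```python
import json

# A minimal CycloneDX-style SBOM fragment (hypothetical contents).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "xz-utils", "version": "5.6.0"},
    {"type": "library", "name": "openssl", "version": "3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.3.1"}
  ]
}
"""

# Hypothetical advisory set: the backdoored xz releases were 5.6.0/5.6.1.
KNOWN_BAD = {("xz-utils", "5.6.0"), ("xz-utils", "5.6.1")}

def flag_components(sbom_text, advisories):
    """Return every SBOM component that matches an advisory entry."""
    sbom = json.loads(sbom_text)
    return [
        (c["name"], c["version"])
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in advisories
    ]

flagged = flag_components(SBOM_JSON, KNOWN_BAD)
print(flagged)
```

The hard parts in practice — generating accurate SBOMs for transitive dependencies and matching fuzzy version ranges — are exactly what the tooling sessions at RSAC are trying to improve.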

Zero Trust Is Finally Just “Security”

I’ve been following the zero trust conversation for the better part of a decade, and this might be the first year at RSAC where it doesn’t feel like a marketing category anymore. It’s just… how you do security now. The perimeter is dead. Identity is the new perimeter. Every request is verified. These aren’t revolutionary statements anymore; they’re baseline assumptions.

What’s more interesting is the implementation maturity. Companies are moving beyond “we deployed a zero trust network access (ZTNA) product” to genuinely rethinking their security architectures around continuous verification. The integration between identity providers, device trust signals, and application-level authorization is getting more seamless, and frameworks like NIST SP 800-207 are being adopted as practical blueprints rather than aspirational documents.
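In code terms, "continuous verification" means every request passes through a policy decision point that weighs identity, device trust, and risk context — never network location. A deliberately simplified sketch (the signal names and thresholds are invented, not drawn from NIST SP 800-207 or any vendor's policy engine):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # identity-provider verdict
    mfa_passed: bool
    device_compliant: bool     # device-trust signal (managed, patched, etc.)
    risk_score: float          # 0.0 (benign) .. 1.0 (high risk)

def authorize(req: Request, risk_threshold: float = 0.7) -> str:
    """Evaluate every request, every time -- no implicit trust from
    being 'inside' the network. Returns 'allow', 'step_up', or 'deny'."""
    if not req.user_authenticated:
        return "deny"
    if req.risk_score >= risk_threshold:
        return "deny"
    if not req.mfa_passed or not req.device_compliant:
        return "step_up"       # require re-auth or remediation first
    return "allow"

print(authorize(Request(True, True, True, 0.1)))    # allow
print(authorize(Request(True, False, True, 0.2)))   # step_up
print(authorize(Request(True, True, True, 0.9)))    # deny
```

The `step_up` branch is the interesting one: mature zero trust deployments degrade gracefully (re-prompt, quarantine, limit scope) rather than making a binary allow/deny call on every request.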

The Talent Gap Hasn’t Closed

Despite all the AI automation talk, the cybersecurity talent shortage remains acute. ISC2’s latest estimates put the global shortage at around 4 million professionals. The irony isn’t lost on anyone: we’re building AI tools to augment security teams partly because we can’t hire enough humans to do the work.

Several RSAC sessions address this through the lens of upskilling — using AI not just as a force multiplier for existing analysts, but as a training tool for junior staff. The idea of an AI copilot that explains its reasoning, teaches analysts about attack patterns, and helps them develop intuition faster is compelling. Whether it works in practice remains to be seen, but the intent is sound.

My Take

RSAC is always a mix of genuine insight and vendor theater, and 2024 is no exception. But if I had to distill the meaningful signal from this year’s conference, it would be this: the security industry is finally grappling with AI as a dual-use technology in a serious way.

The organizations that will fare best are the ones that invest in AI-powered defenses while simultaneously hardening their systems against AI-powered attacks. That means red-teaming your defenses against AI-generated threats, treating your software supply chain as a first-class security domain, and accepting that the threat landscape is evolving faster than any single product can address.

For practitioners, my takeaway is practical: if you haven’t looked at your supply chain security posture since the xz Utils incident, now is the time. Update your threat models to include AI-generated social engineering. And if your organization is still treating zero trust as a future initiative rather than a current priority, you’re behind.

The adversaries are already using AI. The question isn’t whether we should too — it’s whether we can do it thoughtfully enough to stay ahead.
