
The Zero-Day Treadmill — Why Patch Tuesday Still Matters in 2025

Osmond van Hemert

This week’s Patch Tuesday from Microsoft was a stark reminder that while we’re all talking about AI-powered security tools and zero-trust architectures, the fundamentals haven’t changed: unpatched vulnerabilities remain the primary attack vector for most breaches. November’s patch batch included multiple zero-day vulnerabilities under active exploitation, and the details should make every ops team sit up and pay attention.

The November Zero-Days

Microsoft patched over 90 vulnerabilities this month, including several that were already being exploited in the wild. The ones that concern me most are the privilege escalation vulnerabilities in the Windows kernel — CVE-2025-43629 and related issues that allow attackers who’ve gained initial access to elevate to SYSTEM privileges.

What makes these particularly dangerous is the attack chain they enable. An attacker gets initial access through a phishing email or a compromised web application — low-privilege access that might go unnoticed. Then they use these kernel vulnerabilities to escalate privileges, install persistent backdoors, and move laterally through the network. By the time your SIEM alerts fire, the damage is done.

The NTLM-related vulnerability is another one worth flagging. Despite years of deprecation warnings, NTLM authentication is still deeply embedded in enterprise environments. This particular flaw allows relay attacks that can compromise Active Directory environments — and if you think your organization has fully migrated away from NTLM, I’d encourage you to actually check. You might be surprised.
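
If you want to verify rather than assume, Windows will tell you. One low-effort check, sketched below, is to read the NTLM audit events that appear once the "Restrict NTLM: Audit" Group Policy settings are enabled; the log name is standard, but treat the exact query as a starting point to adapt, not a turnkey audit.

```python
# Rough check for lingering NTLM authentication on a Windows host.
# Assumes the "Restrict NTLM: Audit ..." Group Policy settings are enabled,
# which populates the Microsoft-Windows-NTLM/Operational event log.
import subprocess

LOG = "Microsoft-Windows-NTLM/Operational"

def recent_ntlm_events(count: int = 50) -> str:
    """Return the most recent NTLM audit events as plain text via wevtutil."""
    result = subprocess.run(
        ["wevtutil", "qe", LOG, f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    events = recent_ntlm_events()
    if events.strip():
        print("NTLM is still in use. Recent audit events:\n")
        print(events)
    else:
        print("No recent NTLM audit events (or auditing is not enabled).")
```

Run it on a few domain controllers and application servers; if the output is empty everywhere, congratulations, you're in the minority.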

The Patch Management Paradox

Here’s what frustrates me after three decades in this industry: we know exactly how to prevent the majority of security incidents. Patch promptly, enforce least privilege, use multi-factor authentication, and segment your networks. These aren’t novel ideas. They’re not even particularly difficult to implement technically.

Yet organizations consistently struggle with patch management. I’ve consulted with companies running critical infrastructure on Windows servers that are months behind on patches because “we can’t afford the downtime” or “we need to test for compatibility first.” Both are legitimate concerns, but neither is an acceptable excuse when you’re running known-vulnerable software exposed to the internet.

The paradox is that the same organizations investing millions in AI-powered threat detection and extended detection and response (XDR) platforms are simultaneously running unpatched Exchange servers. It’s like installing a state-of-the-art alarm system while leaving the front door unlocked.

What Good Patch Management Actually Looks Like

After helping numerous organizations improve their security posture, I’ve found that effective patch management comes down to a few key principles:

Automated testing pipelines: If you can’t patch quickly because you’re afraid of breaking things, the solution isn’t slower patching — it’s better testing. Organizations that have invested in automated regression testing can deploy patches within days rather than weeks. The investment in CI/CD for your infrastructure pays dividends in security.

Tiered deployment: Not everything needs to be patched simultaneously. Critical zero-days on internet-facing systems? Same day. Internal application servers? Within a week. Legacy systems with compensating controls? Within the standard maintenance window. Having a clear tier system removes the analysis paralysis (I've sketched an example after these principles).

Compensating controls: When you genuinely can’t patch immediately — and there are legitimate cases — have compensating controls ready to deploy. Network segmentation, enhanced monitoring, temporary access restrictions. These buy you time without leaving systems fully exposed.

Visibility: You can’t patch what you don’t know about. Asset inventory sounds boring, but it’s the foundation of everything. I’ve seen organizations discover entire server farms they’d forgotten about during incident response. That’s a failure of basic hygiene, not a failure of security technology.
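
To make the tiering concrete, here's a minimal sketch of the kind of policy table I mean. The tier criteria and deadlines are illustrative examples, not a standard; the point is that the decision should be a lookup, not a meeting.

```python
# Illustrative patch-SLA tiers. The criteria and deadlines are examples,
# not a standard; tune them to your own risk appetite.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Asset:
    name: str
    internet_facing: bool
    actively_exploited_vuln: bool
    has_compensating_controls: bool

def patch_deadline(asset: Asset) -> timedelta:
    """Map an asset to a maximum time-to-patch window."""
    if asset.internet_facing and asset.actively_exploited_vuln:
        return timedelta(days=1)    # exposed system, exploited zero-day: same day
    if not asset.has_compensating_controls:
        return timedelta(days=7)    # internal servers: within a week
    return timedelta(days=30)       # legacy with compensating controls: next window

print(patch_deadline(Asset("edge-proxy-01", True, True, False)))  # 1 day, 0:00:00
```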

The Linux Side Isn’t Immune

While Microsoft’s Patch Tuesday gets the headlines, the Linux ecosystem had its own critical issues this month. Several privilege escalation vulnerabilities in the kernel, along with issues in commonly deployed packages, required attention. The advantage of most Linux environments is that package management and automated updates are more mature — but the disadvantage is that many Linux servers are treated as “set and forget” appliances that nobody monitors.
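
A simple counter to the "set and forget" problem is making forgotten hosts report on themselves. Below is a rough sketch for a dnf-based distribution; the command and its output format are assumptions about your tooling, and Debian-family hosts would need the apt equivalent instead.

```python
# Quick "set and forget" check: count pending security updates on a dnf-based
# host. The dnf invocation is an assumption about your distro; Debian/Ubuntu
# hosts would need apt-based equivalents instead.
import subprocess

def pending_security_update_lines() -> list[str]:
    result = subprocess.run(
        ["dnf", "updateinfo", "list", "--security"],
        capture_output=True, text=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    lines = pending_security_update_lines()
    print(f"{len(lines)} lines of pending security-update info; ship this to your monitoring")
```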

Container environments add another layer of complexity. Your base images might be built on a distribution version with known vulnerabilities, and unless you’re regularly rebuilding and redeploying containers, those vulnerabilities persist indefinitely. Tools like Trivy and Grype help, but only if someone’s actually looking at the output.
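
Making "someone actually looks at the output" automatic is mostly a matter of parsing the report and failing the build. Here's a minimal sketch that assumes a JSON report produced by something like `trivy image --format json -o report.json <image>`; the field names match recent Trivy output but can drift between versions, so check them against your own report.

```python
# Fail the build if a Trivy image scan reports CRITICAL findings.
# Assumes a report produced by: trivy image --format json -o report.json <image>
# Field names match recent Trivy releases but may drift; verify against your report.
import json
import sys

def critical_findings(report_path: str) -> list[str]:
    with open(report_path) as f:
        report = json.load(f)
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") == "CRITICAL":
                findings.append(f'{vuln.get("VulnerabilityID")} in {vuln.get("PkgName")}')
    return findings

if __name__ == "__main__":
    crits = critical_findings("report.json")
    for line in crits:
        print(line)
    sys.exit(1 if crits else 0)
```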

My Take

I know this isn’t a glamorous topic. Nobody gets promoted for maintaining a well-patched infrastructure — they get promoted for deploying the shiny new platform. But after thirty years of watching security incidents unfold, I can tell you that the vast majority could have been prevented with timely patching and basic security hygiene.

My challenge to every engineering leader reading this: when was the last time you audited your patch compliance? Not your policy — your actual compliance. Check your vulnerability scanner results. Look at the mean time between patch release and deployment. If that number is measured in months, you have a problem that no amount of AI-powered security tooling will solve.
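
That metric is trivial to compute once you have the data. A minimal sketch, assuming you can export pairs of patch release date and deployment date from your vulnerability scanner or CMDB; the dates below are placeholders.

```python
# Mean time from vendor patch release to deployment, given exported date pairs.
# The input format and the dates are placeholders; adapt to whatever your
# scanner or CMDB actually exports.
from datetime import date
from statistics import mean

deployments = [
    (date(2025, 11, 11), date(2025, 11, 14)),   # (patch released, patch deployed)
    (date(2025, 11, 11), date(2025, 12, 2)),
    (date(2025, 10, 14), date(2025, 11, 20)),
]

lag_days = [(deployed - released).days for released, deployed in deployments]
print(f"mean time to patch: {mean(lag_days):.1f} days, worst case: {max(lag_days)} days")
```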

The zero-day treadmill never stops. The only question is whether you’re running fast enough to stay on it.

Breaches & Zero-Days - This article is part of a series.
Part : This Article

Related