
OpenSSL's Critical Vulnerability — Lessons From a Week of Preparation

Osmond van Hemert
Breaches & Zero-Days - This article is part of a series.
Part : This Article

On Tuesday, November 1st, 2022, the OpenSSL project released version 3.0.7, patching two high-severity vulnerabilities: CVE-2022-3602 and CVE-2022-3786. Both are buffer overflows in the X.509 certificate verification code, specifically in the punycode decoding used for email address name constraints. What made this event notable wasn’t just the vulnerabilities themselves — it was the week of anticipation that preceded them.

The Pre-Announcement That Shook the Industry

On October 25th, the OpenSSL project announced that a critical security fix would be released on November 1st. They didn’t say what the vulnerability was, only that it affected OpenSSL 3.0 and above and was rated “CRITICAL” — the highest severity.

The internet collectively held its breath. The last time OpenSSL had a critical vulnerability was Heartbleed in 2014, which affected an estimated 17% of the internet’s secure web servers. Organizations around the world started preparing: inventorying which systems ran OpenSSL 3.0, staging patch deployments, and drafting incident response plans.

I spent part of last week auditing our own infrastructure. The good news was that most production systems still run OpenSSL 1.1.1, which isn’t affected. The concerning discovery was how many container base images had quietly moved to OpenSSL 3.0 — several of our build containers based on Ubuntu 22.04 and Alpine 3.17 were in scope.
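When auditing an estate for this issue, the core check is a simple version-range test: the advisory lists 3.0.0 through 3.0.6 as affected, with 3.0.7 as the fix and the 1.1.1 and 1.0.2 branches out of scope. A minimal sketch of that check (`is_affected` is a hypothetical helper, not from any official tooling):

```python
import re

# Sketch: flag OpenSSL builds in the range hit by CVE-2022-3602/-3786.
# Per the advisory, 3.0.0 through 3.0.6 are affected; 3.0.7 is the fix,
# and the 1.1.1 / 1.0.2 branches are not affected at all.

def is_affected(version: str) -> bool:
    """True if an OpenSSL version string like '3.0.4' or '1.1.1k' is in scope."""
    # Pull the numeric components; letter suffixes like '1.1.1k' are ignored.
    parts = tuple(int(n) for n in re.findall(r"\d+", version)[:3])
    return (3, 0, 0) <= parts < (3, 0, 7)

for v in ("1.1.1k", "3.0.4", "3.0.7"):
    print(v, "affected" if is_affected(v) else "not affected")
```

Feeding this the version strings collected from hosts and images turns the audit into a mechanical filter rather than a judgment call.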

What The Vulnerabilities Actually Are

When the details emerged on November 1st, the community’s reaction was a mixture of relief and slight anticlimax. The original CRITICAL rating was downgraded to HIGH before release, after further analysis showed the vulnerabilities were harder to exploit than initially assessed.

CVE-2022-3602 is a 4-byte buffer overflow that can be triggered during X.509 certificate verification when a certificate contains a specially crafted punycode-encoded email address. CVE-2022-3786 is a related overflow in the same code path, but can only overwrite the buffer with the period character (.), limiting its exploitability.

For exploitation, an attacker would need either a malicious CA to sign a crafted certificate, or a legitimate CA to issue a certificate with the malicious payload. The victim’s application would need to be using OpenSSL 3.0+ and have certificate verification enabled. Many platforms also have stack overflow protections that would turn the exploit into a denial-of-service rather than code execution.

This significantly narrows the attack surface compared to Heartbleed, which could be exploited remotely against any server running affected OpenSSL versions. But “hard to exploit” isn’t the same as “impossible to exploit,” and patching should still be treated as urgent.

The SBOM Question

This event has reignited discussions about Software Bills of Materials (SBOMs) and dependency visibility. When the pre-announcement dropped, the first question every security team asked was: “Where are we running OpenSSL 3.0?”

For many organizations, answering this question was surprisingly difficult. OpenSSL is embedded in countless applications, libraries, and container images. Your Python application might use it through the ssl module. Your Node.js runtime links against it. Your Java application might use it through native TLS bindings. It’s everywhere, and tracking it requires systematic dependency management.
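The Python case at least is easy to check from inside the runtime, since the `ssl` module reports which OpenSSL build it was compiled against:

```python
# One concrete inventory check: ask the Python runtime which OpenSSL
# build its ssl module links against.
import ssl

print(ssl.OPENSSL_VERSION)                  # e.g. "OpenSSL 3.0.2 15 Mar 2022"
major, minor = ssl.OPENSSL_VERSION_INFO[:2]
if (major, minor) == (3, 0):
    print("OpenSSL 3.0.x detected: in scope, patch to 3.0.7")
```

Running this one-liner across a fleet catches the Python side of the problem, but it says nothing about statically linked copies elsewhere on the host — which is the visibility gap the rest of this section is about.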

This is exactly the use case SBOMs are designed for. If you had generated SBOMs for your container images and deployed artifacts, you could have queried them to identify affected systems within minutes. Instead, most teams spent hours or days manually auditing their infrastructure.
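Such a query can be a few lines of scripting once the SBOMs exist. A sketch against a CycloneDX-style JSON document (the embedded SBOM below is a hypothetical, trimmed-down example, not a real artifact):

```python
import json

# Sketch: query a CycloneDX-style SBOM for OpenSSL 3.0.x components.
sbom = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.5"},
    {"type": "library", "name": "zlib",    "version": "1.2.13"}
  ]
}
""")

# Find every OpenSSL 3.0.x component; in practice you would loop this
# over the SBOM files generated for each image or artifact.
hits = [
    c for c in sbom.get("components", [])
    if c.get("name") == "openssl" and c.get("version", "").startswith("3.0.")
]
for c in hits:
    print(f"flagged component: {c['name']} {c['version']}")
```

The string-prefix match answers the broad “where do we run 3.0?” question; the version-range check from earlier can then separate patched 3.0.7 builds from affected ones.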

The Biden administration’s Executive Order 14028 on cybersecurity and the NTIA’s “minimum elements” guidance it prompted have been pushing for SBOM adoption, but adoption remains low. Events like this should be the wake-up call.

Container Image Implications

The container ecosystem adds a layer of complexity to OpenSSL patching. If you’re running applications in Docker containers, your OpenSSL version depends on which base image you used and when you built it. A container built from ubuntu:22.04 three months ago has a different OpenSSL version than one built today.

This is why image scanning tools like Trivy, Grype, and Snyk Container exist. They can scan your running containers and registry images for known vulnerabilities, including this one. If you’re not already running these in your CI/CD pipeline, now is the time to start.
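Wiring a scanner in as a CI gate can be a few lines of configuration. A minimal sketch using the Trivy GitHub Action — `myorg/app:latest` is a placeholder image reference, and the inputs shown are assumptions to verify against the action’s documentation:

```yaml
# Hypothetical CI step: fail the build if the image carries
# HIGH or CRITICAL vulnerabilities (which would include CVE-2022-3602).
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myorg/app:latest
    severity: HIGH,CRITICAL
    exit-code: '1'
```

With `exit-code: '1'`, the pipeline breaks on findings instead of merely reporting them, which is what turns scanning into an enforced policy.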

The patch workflow for containers is: update your base images, rebuild your application containers, scan them to confirm the fix, and redeploy. For organizations with hundreds of microservices, this can take days. Automating as much of this pipeline as possible — automated base image updates via Dependabot or Renovate, automated rebuilds, automated scanning — turns a week-long scramble into a routine operation.

My Take

The handling of this vulnerability was, on balance, good. The pre-announcement gave organizations time to prepare. The downgrade from CRITICAL to HIGH was responsible — it reflected updated analysis rather than marketing. And the patch was released on schedule.

But the week of uncertainty also exposed how unprepared many organizations are for rapid patching. If it takes your team a week to answer “where do we run this library?” you have a visibility problem that needs addressing before the next critical CVE drops.

My recommendations coming out of this event: implement SBOM generation in your build pipeline, run container image scanning in CI/CD, and maintain an inventory of your cryptographic library dependencies. The next OpenSSL vulnerability might not be downgraded.
