
HTTP/2 Rapid Reset — The Zero-Day That Hit Everyone

·987 words·5 mins
Osmond van Hemert
Breaches & Zero-Days - This article is part of a series.
Part : This Article

On October 10th, 2023, Google, Cloudflare, and Amazon Web Services jointly disclosed CVE-2023-44487, dubbed the “HTTP/2 Rapid Reset” attack. This wasn’t your typical vulnerability announcement: it had been actively exploited in the wild before disclosure, generating the largest DDoS attacks ever recorded. Google reported attacks peaking at 398 million requests per second; Cloudflare recorded 201 million requests per second.

Let those numbers sink in. Hundreds of millions of requests per second from relatively modest botnets. That’s the kind of amplification factor that changes the threat landscape.

How It Works

The vulnerability is elegant in its simplicity, which is what makes it so dangerous. It exploits a fundamental feature of the HTTP/2 protocol: stream multiplexing and the RST_STREAM frame.

In HTTP/2, a single TCP connection can carry multiple concurrent streams. A client opens a stream by sending a request, and can cancel that stream by sending a RST_STREAM frame. This is normal, expected behaviour — it’s how browsers cancel requests when you navigate away from a page, for example.

The attack works by rapidly opening new streams and immediately cancelling them with RST_STREAM. The key insight is that the server has to do work to process each stream — parsing headers, allocating resources, potentially hitting the application layer — but the client can cancel the stream before the server responds. The client never has to wait for or process responses, keeping its own resource usage minimal.

Because HTTP/2 servers typically allow hundreds of concurrent streams per connection, and because the RST_STREAM cancellation means the client never hits the concurrent stream limit (cancelled streams don’t count), an attacker can generate an enormous request rate with very little bandwidth.
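To see why cancellation defeats the concurrency cap, here's a toy simulation of the stream accounting on a single connection (this models bookkeeping only, not real HTTP/2 framing; the limit value is a typical default, not any specific server's):

```python
# Toy model of HTTP/2 stream accounting on one connection.
# A reset (cancelled) stream frees its slot immediately, so an
# attacker who resets every stream never hits the concurrency cap.

MAX_CONCURRENT_STREAMS = 100  # typical SETTINGS_MAX_CONCURRENT_STREAMS value

def simulate(requests, cancel_immediately):
    open_streams = 0
    accepted = 0
    for _ in range(requests):
        if open_streams >= MAX_CONCURRENT_STREAMS:
            break  # a well-behaved client must wait for responses here
        open_streams += 1      # HEADERS frame: server starts doing work
        accepted += 1
        if cancel_immediately:
            open_streams -= 1  # RST_STREAM frame: the slot frees at once

    return accepted

# A client that waits for responses stalls at the stream limit...
print(simulate(10_000, cancel_immediately=False))  # 100
# ...while a rapid-reset client pushes every request through.
print(simulate(10_000, cancel_immediately=True))   # 10000
```

The server still pays the setup cost for every one of those 10,000 streams; the attacker pays only the bandwidth for two small frames per request.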

This is fundamentally different from traditional HTTP flood attacks. With HTTP/1.1, each request-response cycle consumes resources on both sides. With this HTTP/2 exploitation, the asymmetry is heavily in the attacker’s favour.

Why This Is a Protocol-Level Problem

What makes CVE-2023-44487 particularly concerning is that it’s not a bug in any specific implementation — it’s a consequence of how the HTTP/2 protocol was designed. Every HTTP/2 server is potentially affected: nginx, Apache, IIS, Node.js, Go’s net/http, Envoy, HAProxy, and many more.

Each implementation needs its own mitigation, and the fixes vary. Some servers now limit the rate of RST_STREAM frames. Others track the ratio of reset streams to completed streams and close connections that exceed a threshold. There’s no single patch that fixes everything.
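As a sketch of the ratio-based approach, something like the following sits behind several of the deployed mitigations. The class name and thresholds here are my own illustration, not taken from any particular server:

```python
class ResetRatioGuard:
    """Flag a connection whose reset-to-completed ratio looks abusive.

    Hypothetical sketch of the mitigation described above; real servers
    implement their own variants with their own tuning.
    """

    def __init__(self, min_samples=100, max_reset_ratio=0.5):
        self.min_samples = min_samples        # don't judge on tiny samples
        self.max_reset_ratio = max_reset_ratio
        self.resets = 0
        self.completed = 0

    def on_stream_reset(self):
        self.resets += 1

    def on_stream_completed(self):
        self.completed += 1

    def should_close(self):
        total = self.resets + self.completed
        if total < self.min_samples:
            return False  # not enough data yet
        return self.resets / total > self.max_reset_ratio
```

A browser user navigating away cancels a handful of streams among many completed ones; a rapid-reset attacker cancels nearly everything, so even a generous 50% threshold separates the two cleanly.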

This is reminiscent of other protocol-level issues we’ve seen over the years — slowloris attacks against HTTP/1.1, the TCP SYN flood, or more recently the various TLS renegotiation attacks. When the vulnerability is in the protocol design rather than the implementation, the fix is inherently messier.

What You Need to Do

If you run any HTTP/2-facing infrastructure (and you almost certainly do), here’s the priority list:

1. Patch your web servers and proxies. Nginx released patches, Cloudflare and AWS have mitigated on their platforms, and most major HTTP/2 implementations have issued updates. Check your specific stack.

2. Review your load balancer and reverse proxy configurations. If you terminate HTTP/2 at a load balancer or CDN, make sure that layer is patched. If you’re behind Cloudflare, AWS CloudFront, or Google Cloud CDN, you’re likely already protected — but verify.

3. Check your application servers. Even if you terminate HTTP/2 at a reverse proxy, some architectures pass HTTP/2 through to the application server. If your Node.js, Go, or Java application handles HTTP/2 directly, it needs to be updated.

4. Monitor for unusual patterns. Look for connections with high stream reset rates. A legitimate client rarely opens and immediately cancels hundreds of streams. Implementing rate limiting on RST_STREAM frames at the connection level is a reasonable defensive measure.

5. Consider your HTTP/2 settings. The SETTINGS_MAX_CONCURRENT_STREAMS parameter can limit exposure, though setting it too low affects legitimate performance. Finding the right balance depends on your traffic patterns.
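The rate-limiting idea from point 4 can be sketched as a per-connection sliding window. Again, the numbers are placeholders to be tuned against your own traffic, and whether you respond with a GOAWAY or just deprioritise the connection is a policy choice:

```python
import time
from collections import deque

class RstStreamRateLimiter:
    """Per-connection limit on RST_STREAM frames over a sliding window.

    Illustrative sketch only; thresholds must be tuned so legitimate
    clients (browsers cancelling on navigation) are never affected.
    """

    def __init__(self, max_resets=200, window_seconds=1.0,
                 clock=time.monotonic):
        self.max_resets = max_resets
        self.window = window_seconds
        self.clock = clock          # injectable for testing
        self.events = deque()       # timestamps of recent RST_STREAM frames

    def allow_reset(self):
        now = self.clock()
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_resets:
            return False  # over budget: candidate for GOAWAY / close
        self.events.append(now)
        return True
```

The deque keeps the check O(1) amortised per frame, which matters when the whole point of the attack is frame volume.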

The Bigger Picture

This vulnerability highlights a tension in protocol design that I’ve thought about for years. HTTP/2 was designed for performance — multiplexing, header compression, server push — and those features are genuinely valuable. But every new feature is a new attack surface, and the interaction between features (multiplexing + stream cancellation) can create unexpected vulnerabilities.

We’re seeing the same pattern with HTTP/3 and QUIC. More complexity means more potential for exploitation. I’m not saying we should stick with HTTP/1.1 forever — the performance benefits of modern protocols are real and important. But we need to be more rigorous about adversarial analysis during protocol design.

The coordinated disclosure between Google, Cloudflare, and AWS is a positive example of how the industry should handle these situations. These companies are competitors in almost every other dimension, but when it comes to protocol-level vulnerabilities, they recognised the need to work together. The attacks were being observed in August and September, and the coordinated response gave major infrastructure providers time to deploy mitigations before the public disclosure.

My Take

In my thirty years in this industry, I’ve seen plenty of “record-breaking” DDoS attacks. What makes this one different is the efficiency of the attack. You don’t need a massive botnet — the protocol amplification does the heavy lifting. That lowers the barrier for attackers significantly.

If you haven’t patched yet, do it today. Not tomorrow, not next sprint — today. The attack is already being used in the wild, the technique is now public knowledge, and exploitation tools will only proliferate.

The silver lining is that this is fixable at the implementation level, even if the underlying protocol design is the root cause. Rate limiting RST_STREAM frames is a reasonable mitigation that doesn’t significantly impact legitimate traffic. But it requires action from everyone running HTTP/2 infrastructure, which is essentially everyone running a web service in 2023.

Stay safe out there. Patch your servers, watch your traffic, and remember: the protocols we rely on are only as secure as their most creative adversarial analysis.
