
Post-Quantum Cryptography — The Migration Clock Is Ticking

841 words · 4 mins
Osmond van Hemert
Cybersecurity Landscape - This article is part of a series.

In August 2024, NIST officially published its first three post-quantum cryptography (PQC) standards: FIPS 203 (ML-KEM), FIPS 204 (ML-DSA), and FIPS 205 (SLH-DSA). What felt like an academic exercise for years is now a concrete engineering challenge. If you’re running any system that touches TLS, SSH, code signing, or certificate management — which is basically everything — the migration clock is ticking.

I’ve been through enough cryptographic transitions in my career to know how these go. They always start slow and end in a panic. This one has the added complexity of being driven by a threat that doesn’t fully exist yet: a cryptographically relevant quantum computer. But the “harvest now, decrypt later” attack vector makes waiting a genuinely bad strategy.

What Changed in Practice

The finalization of these standards means that vendors can now ship implementations without worrying about algorithm changes. OpenSSL 3.5 is expected to include ML-KEM support, and several cloud providers have already started offering PQC key agreement in their TLS implementations. Google has been running hybrid post-quantum key exchange in Chrome since 2024, and the latest data suggests the performance overhead is minimal for most use cases.

The real shift, though, is organizational. The U.S. government has set a 2035 deadline for migrating federal systems to quantum-resistant cryptography (National Security Memorandum 10, reaffirmed in the National Cybersecurity Strategy). That sounds far away until you realize how many systems need to be inventoried, tested, and migrated. Anyone who dealt with the TLS 1.0/1.1 deprecation (which took the better part of a decade) knows that ten years isn’t as long as it sounds.

The Inventory Problem

Before you can migrate anything, you need to know what you have. This is where most organizations will stumble. Cryptographic agility — the ability to swap out algorithms without rewriting applications — has been a best practice for years, but in my experience, very few teams have actually implemented it properly.

The first step is a cryptographic inventory: every certificate, every key, every hardcoded algorithm reference in your codebase. Tools like IBM’s Quantum Safe Explorer and emerging formats like the Cryptography Bill of Materials (CBOM) can help with this, but the reality is that most organizations have cryptographic dependencies buried in layers of abstraction they’ve never needed to think about.
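A first-pass inventory doesn’t need a commercial tool to get started. As a minimal sketch (the algorithm list and file glob are illustrative, not exhaustive), a simple scan for hardcoded algorithm names can surface the obvious hits before a deeper audit:

```python
import re
from pathlib import Path

# Quantum-vulnerable public-key algorithm names worth flagging in a
# first pass. Illustrative only; a real inventory also covers certs,
# key files, and library version pins.
PATTERN = re.compile(r"\b(RSA|ECDSA|ECDH|X25519|Ed25519|DSA)\b")

def scan_tree(root: str) -> dict[str, list[tuple[int, str]]]:
    """Return {file: [(line_number, algorithm), ...]} for every match."""
    hits: dict[str, list[tuple[int, str]]] = {}
    for path in Path(root).rglob("*.py"):  # extend the glob per language
        text = path.read_text(errors="ignore")
        for n, line in enumerate(text.splitlines(), start=1):
            for m in PATTERN.finditer(line):
                hits.setdefault(str(path), []).append((n, m.group(1)))
    return hits
```

The output is crude, but it gives you a concrete worklist and, more importantly, a baseline you can diff as migration proceeds.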

I spent some time last week auditing one of our internal services. What should have been a straightforward exercise turned into a rabbit hole of transitive dependencies. A library we use for JWT validation depends on a specific key type that has no post-quantum equivalent yet. Multiply that by every service in a typical microservices architecture, and you start to see the scale of the problem.

Hybrid Approaches and Transition Strategies

The consensus in the security community is that hybrid key exchange — combining a classical algorithm (like X25519) with a post-quantum one (like ML-KEM-768) — is the right transitional approach. This gives you quantum resistance while maintaining a fallback if any issues are discovered in the new algorithms.
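The core idea of a hybrid scheme is that the final key depends on both shared secrets, so an attacker has to break both algorithms. Below is a minimal sketch of that combiner, using a small HKDF-SHA256 (RFC 5869) built from the standard library; the random byte strings stand in for the real X25519 and ML-KEM-768 outputs, and the salt/info labels are arbitrary demo values, not any protocol’s actual key schedule:

```python
import hashlib
import hmac
import os

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then expand to `length` bytes."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Derive one key from both secrets: breaking either alone is not enough."""
    return hkdf_sha256(salt=b"\x00" * 32,
                       ikm=classical_ss + pq_ss,
                       info=b"hybrid-kex-demo")

classical = os.urandom(32)  # stand-in for the X25519 shared secret
pq = os.urandom(32)         # stand-in for the ML-KEM-768 shared secret
session_key = hybrid_secret(classical, pq)
```

Real deployments (such as the X25519MLKEM768 group used in TLS) define the exact concatenation order and key-schedule binding, so treat this purely as the shape of the idea.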

Cloudflare published excellent data on their hybrid PQC deployment showing that the additional handshake overhead is roughly 1KB and a few hundred microseconds. For most web applications, that’s negligible. But for constrained environments — IoT devices, embedded systems, real-time protocols — the picture is more complicated.

The key size increase is the real concern for constrained devices. ML-KEM-768 public keys are 1,184 bytes compared to 32 bytes for X25519. For a device doing thousands of handshakes per second, that memory and bandwidth overhead adds up. This is an area where I expect we’ll see significant innovation over the next few years.
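The arithmetic is easy to run yourself. The sizes below come from FIPS 203 and the X25519 spec; the handshake rate is an assumed workload, purely for illustration:

```python
# Public object sizes in bytes (FIPS 203 for ML-KEM-768; RFC 7748 for X25519).
X25519_PUBKEY = 32
MLKEM768_PUBKEY = 1184      # encapsulation key
MLKEM768_CIPHERTEXT = 1088  # what the responder sends back

# Hypothetical constrained-gateway workload (assumption, not a benchmark).
handshakes_per_sec = 5000

# Extra bytes per hybrid handshake vs. classical-only: the larger key
# going out plus the ML-KEM ciphertext coming back.
extra_per_handshake = (MLKEM768_PUBKEY - X25519_PUBKEY) + MLKEM768_CIPHERTEXT
extra_mb_per_sec = handshakes_per_sec * extra_per_handshake / 1e6
print(f"{extra_per_handshake} extra bytes/handshake, {extra_mb_per_sec:.1f} MB/s")
```

At this assumed rate that works out to roughly 11 MB/s of additional traffic — trivial for a data center, very much not trivial for a battery-powered radio link.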

What Should You Do Now?

If you’re a platform or infrastructure engineer, start with the inventory. You can’t plan a migration you can’t measure. Run your dependency scanners with crypto-aware rules, catalog your certificate infrastructure, and identify your most critical data flows.
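Once the catalog exists, even a trivial triage pass turns it into a prioritized worklist. This is a toy sketch — the inventory schema and algorithm set are assumptions for illustration:

```python
# Public-key algorithms that a cryptographically relevant quantum
# computer would break (via Shor's algorithm).
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "X25519", "Ed25519", "DSA"}

# Hypothetical inventory entries; a real catalog would come from your
# scanner output or a CBOM document.
inventory = [
    {"asset": "api-gateway TLS cert", "algorithm": "RSA", "bits": 2048},
    {"asset": "internal mTLS", "algorithm": "ECDSA", "bits": 256},
    {"asset": "firmware signing", "algorithm": "ML-DSA", "bits": None},
]

def needs_migration(entry: dict) -> bool:
    return entry["algorithm"] in QUANTUM_VULNERABLE

todo = [e["asset"] for e in inventory if needs_migration(e)]
```

Cross-reference the flagged assets against your critical data flows and you have the skeleton of a migration plan.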

If you’re building new systems, design for cryptographic agility from day one. Abstract your crypto operations behind interfaces that can be swapped without application changes. Use libraries that already support PQC algorithms — BoringSSL, liboqs, and PQClean all have usable implementations.
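What that abstraction can look like in practice: applications depend on a KEM interface and a registry, never on a concrete algorithm, so moving to a PQC scheme becomes a registration change rather than a rewrite. The implementation below is a deliberately fake stand-in (the interface names and registry are my own sketch, and a real entry would wrap X25519 or ML-KEM via something like liboqs):

```python
import os
from abc import ABC, abstractmethod

class KeyEncapsulation(ABC):
    """Application-facing KEM interface; callers never see the algorithm."""
    name: str

    @abstractmethod
    def generate_keypair(self) -> tuple[bytes, bytes]: ...  # (public, secret)
    @abstractmethod
    def encapsulate(self, public_key: bytes) -> tuple[bytes, bytes]: ...  # (ct, ss)
    @abstractmethod
    def decapsulate(self, secret_key: bytes, ciphertext: bytes) -> bytes: ...

class FakeKEM(KeyEncapsulation):
    """Toy stand-in with no security whatsoever, for wiring demos only."""
    name = "fake-kem"

    def generate_keypair(self):
        sk = os.urandom(32)
        return sk, sk  # toy: public key equals secret key

    def encapsulate(self, public_key):
        ss = os.urandom(32)
        return ss, ss  # toy: the "ciphertext" is the shared secret itself

    def decapsulate(self, secret_key, ciphertext):
        return ciphertext

REGISTRY: dict[str, KeyEncapsulation] = {"fake-kem": FakeKEM()}

def negotiate(name: str) -> KeyEncapsulation:
    # Swapping algorithms later = registering a new entry, nothing else.
    return REGISTRY[name]
```

The point is the shape, not the crypto: when ML-KEM support lands in your library of choice, only the registry changes.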

If you’re in a regulated industry — finance, healthcare, government — check your compliance frameworks. Several are already being updated to require PQC migration plans, and having a documented strategy will be expected sooner than you think.

My Take

I’ll be honest: the quantum threat timeline is uncertain, and there’s a real risk of premature optimization here. But the cost of starting the inventory and planning now is low compared to the cost of a rushed migration later. The organizations that handled the SHA-1 deprecation and TLS 1.2 migration smoothly were the ones that started early and moved methodically.

The worst outcome isn’t starting too early — it’s discovering in 2030 that your core banking system has hardcoded RSA-2048 in a library that hasn’t been maintained since 2019. Start the audit now. Future you will be grateful.

This is part of my ongoing Security in Practice series, where I dig into the security challenges that actually affect working engineers.
