Kubernetes 1.24 “Stargazer” landed this week, and with it, the change that’s been looming since December 2020: Dockershim is gone. Removed. No longer shipped with kubelet. If you’re running Docker as your container runtime in a Kubernetes cluster and you upgrade to 1.24 without migrating, your nodes will fail to start.
This has been one of the most communicated deprecations in Kubernetes history, and yet I guarantee there are teams out there who will be caught off guard. I’ve been through enough infrastructure migrations to know the pattern: “we’ll deal with it later” has a way of becoming “why is production down?”
## What Dockershim Actually Was
To understand why this matters, you need to understand the architecture. Kubernetes doesn’t run containers directly; it delegates to a Container Runtime Interface (CRI)-compliant runtime. The CRI specification defines how kubelet communicates with whatever actually manages containers on the node.
Docker was never CRI-compliant. Docker has its own API, its own daemon, its own way of doing things. So the Kubernetes project maintained Dockershim — a translation layer that sat between kubelet’s CRI calls and Docker’s API. It worked, but it was an ongoing maintenance burden for the Kubernetes project, adding complexity and potential failure modes.
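To make that concrete: which runtime a node uses comes down to one kubelet setting. Here’s a minimal sketch, assuming the upstream default socket paths (your distro or installer may put them elsewhere):

```bash
# kubelet speaks CRI over a local gRPC socket; you point it at whichever
# runtime owns that socket. containerd's default CRI endpoint:
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# or, for CRI-O:
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

With the shim gone, that endpoint has to point at a real external runtime.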
The original deprecation announcement was careful to explain that this didn’t mean Docker images would stop working. OCI images are OCI images regardless of what built them. You can still use docker build to create your images. You can still push them to any registry. The only thing changing is what runs those images on the Kubernetes node.
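As a tiny illustration (the registry and image names here are hypothetical): the build-and-push side stays pure Docker, and the cluster side doesn’t care what built the image.

```bash
# Build and push with the Docker CLI, exactly as before
docker build -t registry.example.com/team/app:1.0 .
docker push registry.example.com/team/app:1.0

# On the cluster, containerd or CRI-O pulls and runs the same OCI image
kubectl run app --image=registry.example.com/team/app:1.0
```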
## The CRI Alternatives
The two primary CRI-compliant runtimes are containerd and CRI-O. Both are mature, well-tested, and honestly better suited to running containers in a Kubernetes context than Docker ever was.
containerd is the most natural migration path because it was literally extracted from Docker. Docker itself uses containerd under the hood — when you run Docker on your laptop, containerd is doing the actual container management. By using containerd directly, you’re cutting out the middle layer (the Docker daemon) and talking straight to the component that does the work. Less overhead, fewer moving parts, same container execution.
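You can see this relationship on any Docker host. A quick sketch, assuming the `ctr` client that ships with containerd is on your path:

```bash
# Start a container through Docker as usual
docker run -d --name demo nginx

# Docker keeps its containers in containerd's "moby" namespace;
# the same container is visible straight from containerd
sudo ctr --namespace moby containers list
```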
CRI-O was purpose-built for Kubernetes. It implements exactly the CRI specification and nothing more. It’s the “do one thing well” option. Red Hat and the OpenShift ecosystem lean heavily on CRI-O, and it’s proven itself in production at massive scale.
Both options support the same OCI image format, the same container lifecycle management, and the same security features. The migration is primarily a node-level infrastructure change, not an application-level one.
## The Migration Path
If you’re running a managed Kubernetes service (EKS, GKE, AKS), you’re probably already fine. Most managed providers migrated their default runtime to containerd months or even years ago. GKE has defaulted to containerd since 1.19, and EKS moved to containerd as the default in its 1.24 AMIs. Check your node configuration, but odds are good you’re covered.
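The check takes one command, assuming nothing beyond `kubectl` access:

```bash
# Print each node's name and reported container runtime
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

If that prints `containerd://...` or `cri-o://...`, you’re done. `docker://...` means you have work ahead.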
Self-managed clusters are where the work lives. Here’s the practical checklist:
1. **Identify affected nodes.** Check what runtime each node is using: `kubectl get nodes -o wide` shows the container runtime in the last column.
2. **Test with containerd first.** Spin up new nodes with containerd, deploy your workloads, and verify everything works. Pay special attention to:
   - Anything that mounts the Docker socket (`/var/run/docker.sock`)
   - DaemonSets that interact with the container runtime
   - Monitoring tools that use the Docker API for metrics
   - Log collection that depends on Docker’s logging driver
3. **Migrate node pools.** Cordon, drain, reconfigure, uncordon. Standard rolling update procedure; a sketch of the commands follows this list. If you’re using infrastructure-as-code (and you should be), update your node templates.
4. **Update your tooling.** `docker exec` into a running pod won’t work anymore at the node level. Use `kubectl exec` instead, which you should have been doing anyway. Tools like `crictl` provide direct access to the CRI runtime if you need node-level debugging.
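Here’s a minimal sketch of steps 3 and 4 for a single node. The node name is hypothetical, and your drain flags will depend on your workloads:

```bash
# Step 3: the standard rolling procedure, one node at a time
kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# ...reconfigure the node to use containerd, or replace it outright...
kubectl uncordon worker-1

# Step 4: node-level debugging moves from the docker CLI to crictl
sudo crictl ps                  # roughly what `docker ps` used to show
sudo crictl pods                # pod sandboxes, a concept Docker never had
sudo crictl logs <container-id> # container logs straight from the CRI runtime
```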
## The Docker Socket Problem
The biggest practical issue I’ve seen teams hit is Docker socket mounting. It’s been a common pattern to mount `/var/run/docker.sock` into pods that need to build images or manage containers, CI/CD runners being the classic example.
With Dockershim gone, there’s no Docker socket to mount. If you’re running Jenkins agents, GitLab runners, or custom CI pipelines that build Docker images inside Kubernetes, you need an alternative:
- Kaniko: Builds container images in Kubernetes without Docker. No daemon, no privileges. My personal recommendation for most use cases.
- Buildah: Daemonless container building from Red Hat. Works well with Podman.
- BuildKit: Docker’s improved build engine, can run as a standalone service.
Each has tradeoffs, but all of them are more secure than mounting the Docker socket, which was always a significant security risk.
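As a sketch of what the replacement pattern looks like, here’s a minimal Kaniko build job. The Git context and destination are hypothetical, and a real setup would also mount registry credentials into `/kaniko/.docker/`:

```bash
# A one-off in-cluster image build with no Docker socket anywhere
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args:
        - --context=git://github.com/example/app.git
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/team/app:1.0
EOF
```

The build runs in an ordinary pod with no privileged mode and no host socket, which is exactly what the old pattern could never offer.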
## My Take
This is one of those changes that’s been coming for so long that the actual event feels anticlimactic. The Kubernetes project handled the communication well — two years of warnings, detailed migration guides, and clear timelines. If you’re caught off guard, that’s on you.
But I think this moment is symbolically important. Docker revolutionized how we think about application packaging and deployment. It deserves enormous credit for making containers accessible. But the container ecosystem has outgrown any single tool, and the standardization around OCI and CRI means we’re no longer dependent on any one implementation.
I’ve been running containerd in my clusters since Kubernetes 1.20, and honestly, you forget it’s there — which is exactly what you want from infrastructure. It’s faster, uses less memory, and has fewer failure modes than the Docker daemon.
If you haven’t migrated yet, this week is the week. The path is well-worn and the tooling is ready. Don’t let this be the migration you do under pressure at 2 AM.
