The Bitnami Docker.io Deletion — When Your Infrastructure Disappears Overnight

Osmond van Hemert
Kubernetes & Containers - This article is part of a series.

If you woke up this week to broken deployments and scrambled to figure out why your containers wouldn’t pull, you weren’t alone. Broadcom quietly deleted the docker.io/bitnami namespace from Docker Hub, taking with it one of the most widely used collections of pre-built application containers in the ecosystem. For teams running Redis, PostgreSQL, WordPress, Kafka, or any of dozens of other services via Bitnami images, this was a very bad morning.

The news spread rapidly across developer communities, with the official Broadcom announcement doing little to soften the blow. Let me walk through what happened, why it matters, and what you should be doing differently.

What Actually Happened
#

Bitnami has been a staple of the container ecosystem for years. Their pre-packaged, regularly updated images made it trivially easy to spin up complex services. Need a properly configured Kafka cluster? docker pull bitnami/kafka and you were off. When Broadcom acquired VMware (which had previously acquired Bitnami), the writing was on the wall for anyone paying attention — but the speed and completeness of this deletion caught many off guard.

The images weren’t deprecated with a sunset timeline. They weren’t moved to a new namespace with redirects. They were simply… gone. If your Kubernetes manifests, Docker Compose files, or CI/CD pipelines referenced docker.io/bitnami/*, they stopped working. No warning, no migration path announced beforehand.

This is particularly painful because Bitnami images had become something of a de facto standard. Helm charts across the ecosystem default to Bitnami images. Internal documentation at countless companies says “just use the Bitnami image.” I’ve personally recommended them in architecture reviews for years.

The Registry Dependency Problem
#

This incident exposes a fundamental fragility in how most organizations manage their container supply chain. We’ve collectively built an infrastructure pattern where thousands of production systems depend on the continued existence and availability of specific image tags on a third-party registry.

Think about it: your production Kubernetes cluster, running your revenue-generating application, depends on being able to pull an image from a namespace controlled by a corporation that may decide at any moment to restructure, rebrand, or simply delete things. This isn’t a theoretical risk anymore.

The mitigation isn’t complicated, but it requires discipline:

  1. Run a private registry — Harbor, GitLab Container Registry, AWS ECR, Azure ACR. Mirror every external image you depend on.
  2. Pin image digests, not tags — bitnami/redis:7.2 is a moving target. bitnami/redis@sha256:abc123... is immutable (assuming the registry keeps it).
  3. Build your own base images — Yes, it’s more work. But you control the supply chain entirely.
  4. Treat container images like vendored dependencies — You wouldn’t let your Go modules or npm packages disappear from under you (well, we learned that lesson too).
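As a rough sketch of steps 1 and 2, here is the mirror workflow as a small script. The hostname registry.internal is a placeholder for whatever Harbor/ECR/ACR instance you run; the script only prints the docker commands so you can review them before piping the output to sh:

```shell
# Sketch of the mirror workflow from steps 1 and 2. "registry.internal"
# is a placeholder; substitute your own registry hostname. Nothing is
# pulled or pushed here -- the commands are printed for review.
MIRROR="registry.internal/mirror"

mirror_cmds() {
    src="$1"
    # Nest the upstream path under the mirror, dropping the docker.io/ prefix.
    dst="$MIRROR/${src#docker.io/}"
    printf 'docker pull %s\n' "$src"
    printf 'docker tag %s %s\n' "$src" "$dst"
    printf 'docker push %s\n' "$dst"
}

mirror_cmds docker.io/bitnami/redis:7.2
```

Once the mirrored copy exists, point your manifests at registry.internal/mirror/... and pin the digest the mirror reports, not the tag you pulled.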

I’ve been running a Harbor instance for my own projects for a few years now, and I mirror every upstream image I use. It adds maybe 30 minutes to my setup process for a new project, but events like this validate that investment completely.

The Broadcom Effect
#

Let’s zoom out a bit. This isn’t an isolated incident — it’s part of a pattern. Since Broadcom’s acquisition of VMware closed, the company has been aggressively restructuring, re-licensing, and consolidating. VMware licensing changes have already pushed many organizations to evaluate alternatives. The Bitnami deletion is another data point in the same trend.

When a company focused on maximizing acquisition value takes over developer-beloved tools, the tools often suffer. We’ve seen this play out with Oracle and Java, with IBM and Red Hat (though that’s been more nuanced), and now with Broadcom and the VMware ecosystem.

The broader lesson is about organizational dependency. Open source software is only as reliable as the entity hosting it. The code itself might be free, but the distribution infrastructure — registries, package managers, CDNs — represents real costs that somebody has to bear. When the entity bearing those costs changes priorities, you feel it.

What Teams Should Do Right Now
#

If you’re still recovering from this week’s outage, here’s the immediate action plan:

Short term: Find where Bitnami images were being used. Check your Dockerfiles, Compose files, Helm charts, and CI pipelines. Broadcom has indicated that images will be available through their own registry, so update your references accordingly — but don’t just point to the new location and call it done.
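For the audit step, a recursive grep over the usual manifest locations is enough to build an inventory. The file globs here are assumptions about where manifests typically live; widen them to match your repo layout:

```shell
# Find Bitnami image references in a checkout. The --include globs are
# a guess at common manifest locations; adjust for your layout.
find_bitnami_refs() {
    grep -rnE '(docker\.io/)?bitnami/[a-z0-9_-]+' "$1" \
        --include='*.yaml' --include='*.yml' \
        --include='Dockerfile*' --include='*.tpl' \
        || true   # grep exits 1 on no matches; that's not an error here
}

find_bitnami_refs .
```

Remember that Helm charts can reference Bitnami images indirectly through default values, so also check the rendered output of `helm template`, not just the files you wrote yourself.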

Medium term: Set up image mirroring. Every external image your production systems use should be cached in a registry you control. This is non-negotiable for any serious deployment.
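One way to make the mirror transparent to your cluster, assuming containerd 1.6+ as the runtime and a hypothetical registry.internal mirror host, is a per-registry hosts.toml that tries the mirror before falling through to Docker Hub:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Fallback if the mirror doesn't have the image:
server = "https://registry-1.docker.io"

[host."https://registry.internal"]
  capabilities = ["pull", "resolve"]
```

This sketch assumes containerd is configured with `config_path = "/etc/containerd/certs.d"` in its registry section; the hostname is a placeholder for your own mirror. The upside of this approach is that image references in your manifests don't change at all.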

Long term: Evaluate whether you actually need pre-built images at all. For many services, building from the official upstream image (e.g., the official redis image on Docker Hub, published through the Docker Official Images program) is just as easy and removes the Bitnami dependency entirely.
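As an illustration, a Compose service that used bitnami/redis can usually point at the official image instead, though it's rarely a pure drop-in swap: Bitnami images have their own environment variables, runtime users, and file paths. The digest comment below is a placeholder; resolve the real one before pinning.

```yaml
# docker-compose.yml (sketch) -- swapping a Bitnami image for the
# official one. Re-check env vars, volumes, and the runtime user,
# since Bitnami's conventions differ from upstream's.
services:
  cache:
    image: redis:7.2   # better: redis:7.2@sha256:<digest you resolved>
    volumes:
      - redis-data:/data   # official image's data path (Bitnami uses /bitnami/redis/data)

volumes:
  redis-data:
```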

My Take
#

I’ll be honest — I’m annoyed but not surprised. The consolidation of open source tooling under large corporate umbrellas has been accelerating, and the incentives don’t align with long-term community stewardship. Bitnami was enormously valuable precisely because it was reliable and consistent. That reliability was always contingent on someone choosing to maintain it.

The container ecosystem has matured enough that we should treat image registries with the same skepticism we treat any external dependency. Mirror everything. Pin everything. Trust nothing you don’t control.

Thirty years in this industry has taught me one thing above all: the infrastructure you depend on will eventually be pulled out from under you. Plan accordingly.

This post is part of the Infrastructure Notes series, where I cover the tools, platforms, and practices that keep our systems running — or don’t.

