Kubernetes 1.29 Mandala — Sidecars Finally Graduate

Osmond van Hemert

Kubernetes & Containers - This article is part of a series.

Kubernetes 1.29, codenamed Mandala, dropped this week, and while it may not have the headline-grabbing drama of an AI model launch, this release carries some changes that will genuinely affect how we build and operate containerized workloads. After managing container orchestration systems since the early Docker Swarm days, I can appreciate when a release focuses on solving real operational pain points rather than adding flashy new features.

Native Sidecar Containers: Finally

The headline feature for me is the promotion of native sidecar containers to beta. If you’ve ever dealt with the awkward dance of init containers, sidecar proxies, and job completion semantics in Kubernetes, you know this has been a long time coming.

The problem was straightforward but surprisingly painful: Kubernetes had no first-class concept of a container that should start before the main application container and run alongside it for the pod’s lifetime. Service meshes like Istio worked around this with init containers and lifecycle hooks, but the hacks were fragile. Jobs were particularly problematic — a Job’s pod would technically never complete because the sidecar container kept running.

With KEP-753, sidecar containers are now defined by setting restartPolicy: Always on an init container. They start in order before the regular containers, run for the pod's lifetime, are terminated after the main containers during shutdown, and no longer block Job completion. It's elegant in its simplicity, and it solves a category of bugs that has plagued service mesh deployments for years.
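Here's a minimal sketch of the new pattern; the log-shipper sidecar, images, and volume layout are illustrative, not taken from the release notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper            # illustrative sidecar
      image: fluent/fluent-bit:2.2
      restartPolicy: Always        # this one field makes it a sidecar
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  containers:
    - name: app
      image: my-app:1.0            # placeholder application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```

In a Job, a pod like this is considered complete once app exits, even though log-shipper is still running at that moment.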

Networking Improvements

The networking side of 1.29 brings several welcome changes. The nftables backend for kube-proxy has reached alpha, beginning the long-overdue migration away from iptables. If you’ve ever debugged a cluster with thousands of services and watched iptables rules balloon into an unmanageable mess, you understand why this matters. nftables offers better performance characteristics and a more maintainable rule structure.
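Opting in is a kube-proxy configuration change. Roughly, assuming you manage kube-proxy through a KubeProxyConfiguration file, it looks like this:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Alpha in 1.29: the nftables mode is gated behind a feature flag
featureGates:
  NFTablesProxyMode: true
mode: "nftables"
```

Alpha means you shouldn't run this in production yet, but it's worth exercising in a test cluster if iptables rule counts are already a pain point for you.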

There’s also progress on the Gateway API front, which continues its march toward becoming the standard for ingress and service mesh configuration in Kubernetes. The API’s maturity is reaching the point where I’m comfortable recommending it for new deployments over the traditional Ingress resource. The expressiveness of HTTPRoute and the multi-tenancy story with Gateway classes solve problems that Ingress never could cleanly.
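As a flavor of that expressiveness, a minimal HTTPRoute attached to a platform-owned Gateway might look like this (the resource names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
    - name: shared-gateway         # hypothetical Gateway owned by a platform team
  hostnames:
    - "app.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-svc            # hypothetical backend Service
          port: 8080
```

The split between Gateway (infrastructure, owned by the platform team) and HTTPRoute (application routing, owned by app teams) is exactly the multi-tenancy story that Ingress never had.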

Load Balancer IP Mode

A smaller but practical addition is the alpha loadBalancerIPMode field for Services, which gives more control over how traffic sent to a cloud load balancer's IP reaches your pods. The cloud provider can now declare whether that IP is a true VIP (routed at the network level, letting kube-proxy short-circuit straight to the backing pods) or a proxy endpoint that traffic must genuinely pass through. This matters both for performance-sensitive applications and for load balancers that do real work on the path, such as TLS termination, which kube-proxy's short-circuiting silently bypassed.
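You don't set this in the Service spec; the cloud provider's controller records it in the Service status. With the LoadBalancerIPMode feature gate enabled, the status looks roughly like this (the address is an example):

```yaml
# Excerpt from `kubectl get svc my-svc -o yaml`
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10           # example load balancer address
        ipMode: Proxy              # "VIP" (default) or "Proxy"
```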

For those of us running Kubernetes on the major cloud providers, this kind of fine-grained control over networking behavior is exactly what we need. The default behavior works for most cases, but when you’re optimizing for the tail end of your latency distribution, these knobs matter.

Storage and Scheduling

On the storage front, ReadWriteOncePod access mode is now generally available. This ensures that a PersistentVolumeClaim can only be mounted by a single pod across the entire cluster — not just a single node. It’s a subtle but important distinction for stateful workloads where data corruption from concurrent access is a real risk.
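Using it is just a different access mode on the claim; note that it only works with CSI drivers that support it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOncePod     # exactly one pod cluster-wide may mount this
  resources:
    requests:
      storage: 10Gi
```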

The scheduler also gained improvements around pod topology spread constraints, making it easier to distribute workloads evenly across failure domains. If you’ve wrestled with getting pods spread across availability zones without leaving some zones over-provisioned and others underutilized, the refinements here should help.
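For reference, a spread constraint that keeps replicas balanced across zones looks like this in a pod template (labels and values are illustrative):

```yaml
# Fragment of a Deployment's pod template
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # zones may differ by at most one pod
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx:1.25
```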

The Maturity Story

What strikes me most about Kubernetes 1.29 isn’t any single feature — it’s the pattern. The project has clearly shifted from “add everything” to “finish what we started and make operations smoother.” Features are graduating from alpha to beta to GA at a steady pace. The rough edges that made Kubernetes painful in production three years ago are being systematically filed down.

I remember deploying Kubernetes 1.8 and needing a small army of YAML templating tools, custom operators, and tribal knowledge to keep things running. Today’s Kubernetes, while still complex, is measurably more operable. The sidecar container support alone will eliminate an entire category of support tickets for teams running service meshes.

My Take

Kubernetes 1.29 is a “boring in the best way” release. No revolutionary new concepts, just steady improvement in areas that matter for production workloads. The sidecar support removes a genuine pain point, the networking improvements lay groundwork for better performance, and the storage and scheduling refinements show a project that’s listening to operator feedback.

If you’re running 1.27 or 1.28, the upgrade path should be smooth — as always, test your admission webhooks and any custom controllers first. If you’re still on something older, the accumulated improvements make a compelling case for catching up. The Kubernetes ecosystem in late 2023 is mature enough that upgrades are routine rather than stressful, and that itself is a sign of how far we’ve come.
