
Kubernetes 1.26 — Electrifying the Platform

Osmond van Hemert
Kubernetes & Containers - This article is part of a series.

Kubernetes 1.26, codenamed “Electrifying,” dropped yesterday, and while it’s not the kind of release that generates breathless headlines, it’s packed with meaningful improvements for teams running production clusters. After three releases per year for several years now, the Kubernetes project has found a rhythm of steady, incremental progress — which is exactly what you want from infrastructure software.

I’ve been running Kubernetes in production since the 1.9 days, and what strikes me about recent releases isn’t any single feature but the maturity of the project’s priorities: better defaults, improved stability, and cleaning up technical debt. Let’s look at what matters in 1.26.

CEL for Admission Control: A Big Deal

The feature I’m most excited about is the introduction of Common Expression Language (CEL) based admission control, which arrives in 1.26 as an alpha feature. If you’ve managed Kubernetes at scale, you know that admission webhooks are simultaneously essential and operationally painful. Every mutating or validating webhook adds latency to the API server and introduces a dependency that can bring cluster operations to a halt if the webhook service is unavailable.

CEL-based validating admission policies let you express validation rules directly in the API server without deploying external webhook services. Instead of running an OPA Gatekeeper pod or a custom webhook deployment, you can write validation logic as CEL expressions in a ValidatingAdmissionPolicy resource.

For example, enforcing that all Deployments have resource limits set, which previously required either a webhook or a policy engine, can now be expressed as a CEL expression evaluated in-process by the API server. No external dependencies, no additional latency, no availability concerns.
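A minimal sketch of what such a policy could look like, assuming the `admissionregistration.k8s.io/v1alpha1` API and its feature gate are enabled in 1.26; the resource names here are illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-resource-limits
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  # Every container must declare resource limits.
  - expression: "object.spec.template.spec.containers.all(c, has(c.resources) && has(c.resources.limits))"
    message: "All containers must declare resource limits."
---
# The policy only takes effect once bound; an empty namespaceSelector
# applies it cluster-wide.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-resource-limits-binding
spec:
  policyName: require-resource-limits
  matchResources:
    namespaceSelector: {}
```

Note that a ValidatingAdmissionPolicy does nothing on its own — the binding is what scopes it to resources, which also makes it easy to roll a policy out namespace by namespace.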

This doesn’t replace OPA or Kyverno for complex policy scenarios — CEL expressions are intentionally limited in scope. But for the 80% of admission policies that are straightforward validation checks, this is a meaningful simplification of the operational model.

Storage Improvements

Kubernetes storage continues to mature with several notable changes in 1.26.

Cross-namespace volume data sources move to alpha, allowing PersistentVolumeClaims to reference data sources in different namespaces. This addresses a real workflow limitation — for example, creating a development PVC from a production snapshot without copying the snapshot to the development namespace first.
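As a sketch of the production-snapshot scenario, assuming the `CrossNamespaceVolumeDataSource` feature gate is on and the Gateway API's ReferenceGrant CRD is installed (namespaces and object names are illustrative):

```yaml
# In the source namespace, grant dev's PVCs permission to reference the snapshot.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-dev-snapshot-restore
  namespace: prod
spec:
  from:
  - group: ""
    kind: PersistentVolumeClaim
    namespace: dev
  to:
  - group: snapshot.storage.k8s.io
    kind: VolumeSnapshot
---
# The dev PVC points across namespaces via dataSourceRef.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-restore
  namespace: dev
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: prod-snapshot
    namespace: prod
```

The explicit grant is the interesting design choice: the owner of the source namespace opts in, rather than the consumer reaching across a namespace boundary unilaterally.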

The CSI driver migration project continues its march toward completion. In 1.26, the in-tree Azure Disk and Azure File drivers are marked for migration to their CSI equivalents. If you’re running on Azure, you should be planning your migration if you haven’t already. The in-tree storage drivers have been deprecated for a while, and each release moves closer to their removal.

Retroactive default StorageClass assignment graduates to beta. Previously, if you created a PVC without specifying a StorageClass and no default was set, the PVC would remain unbound even after you later designated a default StorageClass. Now, the system retroactively assigns the default StorageClass to unbound PVCs. It’s a small quality-of-life improvement that eliminates a confusing failure mode.
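For reference, a default StorageClass is designated with the well-known annotation below; with the retroactive behavior enabled, existing unbound PVCs that specify no class pick it up automatically. Provisioner and names are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                    # illustrative name
  annotations:
    # Marks this class as the cluster default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com        # example CSI provisioner
volumeBindingMode: WaitForFirstConsumer
```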

Cleaning House: Removals and Deprecations

Every Kubernetes release removes deprecated features, and 1.26 has several notable ones.

The CRI v1alpha2 API is removed, meaning container runtimes must implement CRI v1. This shouldn’t affect anyone on recent versions of containerd or CRI-O, but if you’re running older runtime versions, this is your forcing function to upgrade.

The deprecated flowcontrol.apiserver.k8s.io/v1beta1 versions of the FlowSchema and PriorityLevelConfiguration APIs are removed. The dynamic kubelet configuration feature gate (DynamicKubeletConfig) is removed entirely. And several beta APIs that have been superseded by GA equivalents are cleaned up.

I appreciate this housekeeping, even though it sometimes causes upgrade friction. A project the size of Kubernetes can’t afford to accumulate legacy APIs indefinitely. Every deprecated API that lingers is a maintenance burden and a source of confusion for newcomers trying to understand which approach is current.

Scheduling Enhancements

The scheduler sees some useful improvements. PodSchedulingReadiness arrives as an alpha feature, introducing a .spec.schedulingGates field that lets external controllers prevent pods from being scheduled until certain conditions are met. The use cases include batch scheduling systems that want to ensure all pods in a job can be placed before any of them start, or quota systems that need to approve resource consumption before scheduling proceeds.
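A gated pod is just a pod with one or more named gates; the gate name below is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  # The scheduler ignores this pod until the gate is removed
  # (e.g. by a hypothetical quota controller).
  - name: example.com/quota-approved
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Until the controller clears the gate from spec.schedulingGates, the pod sits in a visibly gated state rather than cycling through failed scheduling attempts — which is the point of the feature.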

NodeInclusionPolicies for topology spread constraints give you finer control over which nodes are considered when calculating topology spread. You can now exclude tainted nodes from the spread calculation, which is a common request for clusters with mixed node types.
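A sketch of the tainted-node case, assuming the nodeTaintsPolicy/nodeAffinityPolicy fields are available in your 1.26 cluster (labels and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    nodeTaintsPolicy: Honor     # exclude nodes whose taints this pod doesn't tolerate
    nodeAffinityPolicy: Honor   # only count nodes matching the pod's node affinity
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

Setting nodeTaintsPolicy to Honor keeps, say, a tainted GPU pool from skewing the spread math for pods that could never land there anyway.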

Upgrade Considerations

If you’re planning your 1.26 upgrade, a few things to watch for:

The removal of several beta APIs means you should scan for deprecated usage before upgrading — for example with the `kubectl deprecations` krew plugin or your cluster management tool’s equivalent. Manifests referencing removed APIs will fail to apply after the upgrade.

The CRI v1alpha2 removal means your container runtime must be reasonably current. Containerd 1.6+ and CRI-O 1.24+ implement CRI v1.

As always, test the upgrade in a staging environment first. The release notes are comprehensive and worth reading in full.

My Take

Kubernetes 1.26 is a “good infrastructure release” — it improves the platform’s operational characteristics without introducing unnecessary complexity. CEL-based admission policies alone justify attention, as they address one of the most common sources of cluster operational issues.

The project’s discipline around deprecation and removal is commendable. It’s tempting for open-source projects to keep deprecated features forever to avoid breaking users, but that path leads to unmaintainable software. The Kubernetes deprecation policy — clear timelines, multiple release warnings, and tooling to detect usage — is a model other projects should study.

If you’re running Kubernetes in production, plan your 1.26 upgrade for early next year. If you’re evaluating Kubernetes, the CEL admission policies and improved storage story make the operational model meaningfully more approachable than it was even a year ago.
