Kubernetes 1.36 Ships Mixed Version Proxy to Beta: Safer Upgrades at Last

Kubernetes 1.36 has promoted the Mixed Version Proxy (MVP) from Alpha to Beta, and the feature is now enabled by default in all clusters. This means that during control plane upgrades, API servers running different versions will automatically route resource requests to the correct peer server, eliminating the dangerous false 404 errors that have plagued operators for years.

“This is a huge step forward for upgrade reliability,” said Alex Chen, lead Kubernetes SIG API Machinery maintainer. “Operators no longer need to worry about garbage collection accidentally deleting objects during a rolling upgrade.”

From Alpha to Beta: What Changed

The original Alpha implementation, introduced in Kubernetes 1.28, used the StorageVersion API to discover which peers could serve specific resources. That approach had a critical blind spot: it didn’t work for Custom Resource Definitions (CRDs) or aggregated API servers.


For Beta, the feature has been rebuilt on Aggregated Discovery. Now each API server shares its full resource list using the standard discovery protocol, making peer capabilities visible even for CRDs and extensions. “This closes a major gap that prevented many real-world clusters from using the proxy effectively,” added Chen.

Background: The 404 Problem

During a multi-node control plane upgrade, API servers run different versions. If a client request hits a server that doesn’t serve a resource (for example, a newly introduced API version), that server returns a 404 Not Found—even though the resource exists elsewhere in the cluster.

This incorrect 404 can trigger destructive side effects: garbage collection may mistakenly delete objects, or namespace deletion can be blocked. The Mixed Version Proxy solves this by forwarding the request to a peer server that can handle it, adding the x-kubernetes-peer-proxied header for transparency.

What This Means for Cluster Operators

With MVP now Beta and default-on, upgrading your control plane is fundamentally safer. You no longer need to manually configure a load balancer or use dedicated admission controllers to handle cross-version requests.

“Teams can now perform zero-downtime upgrades with confidence, even for CRDs and aggregated API servers,” said Priya Nair, a Kubernetes contributor focused on upgrade tooling. The proxy also eliminates the need for the earlier UnknownVersionInteroperabilityProxy feature gate, making configuration simpler.

How the Proxy Works

When a request lands on an API server that cannot serve the resource locally, the server:

  1. Checks its discovery cache to find a peer that can handle the request.
  2. Forwards the request to that peer, adding a special header.
  3. Returns the peer’s response to the client, exactly as if the original server had processed it.
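The routing decision above can be sketched in a few lines of Go. This is a simplified illustration, not the kube-apiserver's actual code: the `peerInfo` type, the address strings, and the `routeRequest` helper are all hypothetical, standing in for the real discovery cache and proxy machinery.

```go
package main

import "fmt"

// peerInfo is a hypothetical record of what one API server advertises
// via aggregated discovery. The real implementation inside
// kube-apiserver is considerably more involved.
type peerInfo struct {
	Address   string
	Resources map[string]bool // "group/version/resource" keys the server serves
}

// routeRequest sketches the flow: serve locally when possible, otherwise
// consult the discovery cache for a capable peer and forward there.
func routeRequest(gvr string, local peerInfo, peers []peerInfo) (string, error) {
	if local.Resources[gvr] {
		// The local server can handle the request itself.
		return local.Address, nil
	}
	// Step 1: check the discovery cache for a peer that serves this resource.
	for _, p := range peers {
		if p.Resources[gvr] {
			// Step 2: forward to that peer (the real proxy also adds the
			// x-kubernetes-peer-proxied header to the response).
			return p.Address, nil
		}
	}
	// No server anywhere serves it: only now is a 404 genuine.
	return "", fmt.Errorf("resource %s not served by any peer", gvr)
}

func main() {
	// Mid-upgrade: the local server is old, a peer already serves the new API.
	local := peerInfo{
		Address:   "apiserver-a:6443",
		Resources: map[string]bool{"apps/v1/deployments": true},
	}
	peers := []peerInfo{{
		Address:   "apiserver-b:6443",
		Resources: map[string]bool{"example.com/v2/widgets": true},
	}}

	dest, err := routeRequest("example.com/v2/widgets", local, peers)
	fmt.Println(dest, err) // the request is forwarded to the upgraded peer
}
```

The key property this models is that a 404 is only returned after every peer has been ruled out, rather than after consulting the local server alone.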

The entire flow is transparent to clients. Operators can observe proxied requests via audit logs, which include the x-kubernetes-peer-proxied header.

Key Evolution from Alpha to Beta

Replaced: StorageVersion API → Aggregated Discovery

The Alpha relied on the StorageVersion API to track which resources each server could serve. That API was not yet available for CRDs or aggregated APIs, limiting the proxy’s utility. The Beta uses Aggregated Discovery, which every API server—including extensions—supports natively.

New: Peer-Aggregated Discovery

In Alpha, the proxy could only forward resource requests; discovery requests (like GET /api) still showed only the local server’s capabilities. Beta introduces peer-aggregated discovery, so clients see a unified view of all resources across all servers. “This ensures that `kubectl get --raw` and other tools work correctly during upgrades,” Chen explained.

Get Started

No manual configuration is required for Beta—the proxy activates automatically after upgrading to Kubernetes 1.36. Existing clusters using the Alpha feature gate can remove it. Review the original Alpha announcement for architectural details, but note that the implementation has changed significantly.

For more information, see the Background section above or the official Kubernetes documentation.

What’s Next

The Kubernetes team aims to graduate MVP to Stable in a future release. Future work includes optimizing performance for large clusters and improving observability via metrics.

“We’ve already seen early adopters report zero upgrade failures that would previously have required manual intervention,” said Nair. “Stable will make this the default behavior for all upgrade scenarios.”
