Kubernetes v1.36 Alpha: Pod-Level Resource Managers End Performance Trade-Offs for Sidecars
Kubernetes v1.36, released today, introduces Pod-Level Resource Managers as an alpha feature, promising to eliminate a painful trade-off for performance-sensitive workloads. This enhancement extends the kubelet’s Topology, CPU, and Memory Managers to support pod-level resource specifications (.spec.resources), shifting from a per-container allocation model to a pod-centric one.
“For years, teams running ML training, high-frequency trading, or low-latency databases faced an impossible choice—waste dedicated CPU cores on lightweight sidecars or lose Guaranteed QoS entirely,” said Priya Nair, a SIG Node contributor. “Pod-level resource managers let you have both NUMA alignment and efficiency.”
Background: The Sidecar Problem
Modern Kubernetes pods rarely contain a single container. They commonly include sidecars for logging, monitoring, service meshes, or data ingestion. Before this feature, to get exclusive, NUMA-aligned resources for the main application container, you had to allocate integer-based, exclusive CPUs to every container in the pod.
This was wasteful for lightweight sidecars that don’t need dedicated cores. If you didn’t allocate exclusive CPUs to all containers, the pod forfeited its Guaranteed QoS class, losing deterministic performance and NUMA alignment benefits. The new alpha feature changes that.
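To make the old trade-off concrete, here is a sketch of the per-container pattern teams previously had to use (container names, images, and resource figures are illustrative, not from an official example):

```yaml
# Pre-v1.36 pattern: every container needs integer, equal requests and limits
# to keep Guaranteed QoS -- even a lightweight sidecar (values illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-with-sidecar
spec:
  containers:
  - name: database
    image: database:v1
    resources:
      requests: { cpu: "6", memory: "12Gi" }
      limits:   { cpu: "6", memory: "12Gi" }
  - name: log-shipper            # barely uses a core, yet pins one exclusively
    image: log-shipper:v1
    resources:
      requests: { cpu: "1", memory: "1Gi" }
      limits:   { cpu: "1", memory: "1Gi" }
```

The sidecar's dedicated core sits mostly idle, which is exactly the waste the new feature targets.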
How Pod-Level Resource Managers Work
Enabling pod-level resources requires the PodLevelResourceManagers and PodLevelResources feature gates. Once active, the kubelet can create hybrid allocation models—combining exclusive CPU and memory slices for the primary container with a shared pool for sidecars, all while maintaining NUMA alignment.
The feature supports two Topology Manager scopes: pod and container. With pod scope, the kubelet performs a single NUMA alignment based on the pod’s total resource budget. The main container gets exclusive slices; remaining resources form a pod shared pool for auxiliary containers.
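Enabling the gates and the pod scope happens in the kubelet configuration. A minimal sketch follows; the two gate names come from this article, and the static CPU Manager and Static Memory Manager policies are assumed here because exclusive allocation already depends on them (a real node would also need system-reserved CPU and memory configured):

```yaml
# KubeletConfiguration sketch (alpha; gate names as described in this article).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true
  PodLevelResourceManagers: true
cpuManagerPolicy: static          # prerequisite for exclusive CPU slices
memoryManagerPolicy: Static       # prerequisite for exclusive memory slices
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod         # one NUMA alignment for the whole pod budget
```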
Real-World Use Case: Tightly-Coupled Database
Consider a latency-sensitive database pod with a main container, a metrics exporter sidecar, and a backup agent. With pod-level resource managers, the database container receives exclusive CPU and memory from a single NUMA node. The metrics exporter and backup agent run in the pod shared pool, sharing resources with each other but strictly isolated from the database’s exclusive slices and the rest of the node.
“This is a game-changer for database operators,” said Alex Chen, a Kubernetes Platform Engineer at a major financial firm. “We can now co-locate monitoring and backup sidecars on the same NUMA node without wasting dedicated cores—something that was impossible before without losing QoS.”
Example Pod Specification (Alpha)
apiVersion: v1
kind: Pod
metadata:
  name: tightly-coupled-database
spec:
  resources:
    requests:
      cpu: "8"
      memory: "16Gi"
    limits:
      cpu: "8"
      memory: "16Gi"
  initContainers:
  - name: metrics-exporter
    image: metrics-exporter:v1
    restartPolicy: Always        # restartable init container, i.e. a sidecar
  - name: backup-agent
    image: backup-agent:v1
    restartPolicy: Always
  containers:
  - name: database               # main container; image and figures illustrative
    image: database:v1
    resources:
      requests:
        cpu: "6"
        memory: "12Gi"
      limits:
        cpu: "6"
        memory: "12Gi"
In this example, pod-level resources define the overall budget and NUMA alignment size. The Topology Manager (with pod scope) ensures the main container gets exclusive, NUMA-aligned resources from that budget. Sidecars use the remaining shared pool.
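The budget arithmetic behind this split can be sketched in a few lines of Python, assuming the main container requests 6 of the pod's 8 CPUs exclusively (an illustrative figure):

```python
# Sketch of the hybrid allocation model described above (illustrative numbers):
# the pod-level budget is divided into exclusive slices for the main container,
# with the remainder forming the shared pool for sidecar containers.

def shared_pool(pod_budget_cpus: int, exclusive_cpus: int) -> int:
    """CPUs left in the pod shared pool after exclusive allocation."""
    if exclusive_cpus > pod_budget_cpus:
        raise ValueError("exclusive allocation exceeds pod budget")
    return pod_budget_cpus - exclusive_cpus

# Pod budget of 8 CPUs; the database takes 6 exclusively, leaving
# 2 CPUs shared by the metrics exporter and the backup agent.
pool = shared_pool(pod_budget_cpus=8, exclusive_cpus=6)
print(pool)  # -> 2
```

Because the whole 8-CPU budget is aligned to one NUMA node, both the exclusive slices and the shared pool stay NUMA-local.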
What This Means for Production Workloads
This alpha feature addresses a long-standing pain point for high-performance computing on Kubernetes. It enables efficient, deterministic resource allocation for pods with mixed workloads—reducing waste while preserving Guaranteed QoS and NUMA alignment.
Admins should expect more predictable performance for ML training, database clusters, and latency-critical services. However, because the feature is alpha, it must be explicitly enabled via feature gates and is not recommended for production clusters without thorough testing.
Next Steps and Compatibility
To test pod-level resource managers, enable PodLevelResourceManagers=true and PodLevelResources=true in the kubelet. The feature works with Topology Manager set to either pod or container scope. Future releases will likely graduate this feature to beta and eventually stable.
“We encourage the community to experiment and provide feedback,” added Nair. “This is just the beginning of smarter, more flexible resource management in Kubernetes.”
For more details, see the official Kubernetes 1.36 release notes or the KEP for Pod-Level Resource Managers.