Kubernetes 1.34 in Practice: Five Changes You Can Adopt Now

Kubernetes 1.34 landed on August 27, 2025, and the first patch (1.34.1) followed on September 9. The release includes 58 enhancements, with 23 graduating to stable—plenty of practical wins for day‑2 operations. If you’re deciding what to try first, this guide focuses on five changes that are easy to understand, quick to test, and likely to pay off right away. (kubernetes.io)

Why this release matters

A few themes stand out in 1.34:

- Control-plane efficiency: snapshotted, streamed reads keep API server memory flat under heavy list load.
- Stable observability: OpenTelemetry tracing for the kube-apiserver and kubelet graduates to GA.
- Guarded resource management: Linux node swap is stable, off by default, and bounded when enabled.
- Practical security: AppArmor via first-class fields and selector-aware authorization.
- Developer experience: KYAML output for kubectl and automatic cgroup driver detection.

Below, I’ll explain what each change does, why you should care, and how to try it safely.


1) The control plane breathes easier: snapshots + streaming lists

If you’ve ever watched your API server memory spike during a big list operation, this one’s for you. In 1.34, the “snapshottable API server cache” went beta and is enabled by default. Combined with earlier work—consistent reads from the API server cache (1.31) and streaming list responses (1.33)—the control plane can now serve most read requests directly from memory and stream them out without building huge buffers. That means fewer surprises under load and less pressure on etcd. (kubernetes.io)

What to do:

There’s nothing to turn on here—just upgrade and enjoy the smoother ride. (kubernetes.io)
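If you want to verify, the API server reports gate status through its kubernetes_feature_enabled metric. A quick check (the gate names are my reading of the 1.34 feature-gate list: ListFromCacheSnapshot for the snapshotted cache, ConsistentListFromCache, and the streaming encoders; confirm them against your release notes):

kubectl get --raw /metrics \
  | grep kubernetes_feature_enabled \
  | grep -E 'ListFromCacheSnapshot|ConsistentListFromCache|StreamingCollectionEncoding'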


2) End‑to‑end traces you can actually use (kube‑apiserver + kubelet)

Kubernetes components have supported tracing for some time, but 1.34 promotes tracing for the API server and kubelet to stable, with clean, versioned configuration. Both components export OpenTelemetry (OTLP) traces and default to an OTLP collector at localhost:4317 if you don’t specify an endpoint. Even better, kubelet propagates trace context to the container runtime (for example containerd or CRI‑O), so you can see a Pod’s lifecycle across control plane, node, and runtime in a single trace. (kubernetes.io)

How to try it:

# kube-apiserver (config file referenced with --tracing-config-file)
apiVersion: apiserver.config.k8s.io/v1
kind: TracingConfiguration
# endpoint defaults to localhost:4317 if omitted
samplingRatePerMillion: 5000

# kubelet (KubeletConfiguration snippet)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  # endpoint defaults to localhost:4317 if omitted
  samplingRatePerMillion: 5000

Once enabled, you can follow a request end‑to‑end: admission webhooks, etcd calls, and node‑level CRI operations appear as linked spans with shared context. It’s a big quality‑of‑life improvement for debugging slow starts and odd node behavior. (kubernetes.io)
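To see those spans quickly in a lab, run a collector on the control-plane node so the default localhost:4317 endpoint resolves to something. A sketch using Jaeger's all-in-one image (the tag is illustrative; pick a current release):

# OTLP gRPC collector on 4317, trace UI on 16686
docker run --rm -d --network host \
  -e COLLECTOR_OTLP_ENABLED=true \
  jaegertracing/all-in-one:1.58
# Browse http://<node>:16686 and look for the apiserver and kubelet services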


3) Linux swap goes stable—use it carefully

Swap support for Linux nodes has been a long journey in Kubernetes. With 1.34, node swap is stable with a “LimitedSwap” mode that keeps behavior predictable: only Burstable QoS Pods can use swap, and the amount allowed is bounded automatically based on each container’s memory request and the node’s capacity. Kubernetes still defaults to NoSwap, so you opt in per node. (kubernetes.io)
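The bound is proportional to each container's share of node memory. Roughly, per the node swap design:

# LimitedSwap per-container bound:
#   swapLimit = (containerMemoryRequest / nodeMemoryCapacity) * nodeSwapCapacity
# Example: a container requesting 2Gi on a 16Gi node with 8Gi of swap
# may use (2/16) * 8Gi = 1Gi of swap.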

How to enable it on a Linux node (cgroup v2 required): 1) Provision swap on the host (file or partition) and make sure it’s active at boot (a sketch follows these steps). 2) Configure the kubelet:

# KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap

3) Restart kubelet. Pods in Burstable QoS can now use a bounded amount of swap; Guaranteed and high‑priority Pods don’t touch swap. This gives you a gentler failure mode during memory spikes without turning your cluster into a paging festival. (kubernetes.io)
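For step 1, a minimal way to provision a swap file on the host (the 4G size is illustrative; size it for your workloads):

# Create and enable a swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Keep it active across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab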

Caveats:

- Swap trades latency for headroom: pages served from disk are far slower than RAM, so keep latency-sensitive workloads in Guaranteed QoS (which never swaps) or on no-swap nodes.
- Data paged to disk can include secrets held in memory; consider encrypted swap on the host.
- Watch node memory and swap metrics after enabling it; swap can delay evictions and mask memory pressure you would otherwise catch early.


4) Practical security wins: AppArmor fields and selector‑aware authorization

Two security changes are immediately useful:

- AppArmor profiles are set through first-class API fields (appArmorProfile under the Pod or container securityContext) instead of the legacy beta annotations.
- Authorization is selector-aware: authorizers, including webhooks, can see the field and label selectors on list and watch requests, so you can grant access to only the objects a client filters for rather than whole collections.

What to do: audit manifests for the old container.apparmor.security.beta.kubernetes.io annotations and move them to securityContext fields (see the sketch below), and if you run a webhook authorizer, extend it to read the selector information from SubjectAccessReview so list/watch permissions can be narrowed.
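Here’s a minimal sketch of the AppArmor fields in a Pod spec (the name and image are placeholders):

# Pod-level AppArmor via securityContext
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
spec:
  securityContext:
    appArmorProfile:
      type: RuntimeDefault   # or Localhost plus localhostProfile: <profile>
  containers:
  - name: app
    image: nginx:1.27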


5) Small things that remove big headaches: KYAML output and cgroup driver autodetect

Two developer-experience fixes worth adopting:

- KYAML output: kubectl 1.34 adds an opt-in -o kyaml format, a strict YAML subset (explicit braces and brackets, double-quoted strings, no significant whitespace), so output survives copy-paste and templating.
- cgroup driver autodetection: the kubelet now asks the container runtime, over CRI, which cgroup driver it uses, ending the familiar failure mode of a kubelet/containerd cgroupDriver mismatch.

Example:

export KUBECTL_KYAML=true
kubectl get deploy -n default -o kyaml
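The result reads like JSON-flavored YAML: unquoted keys, double-quoted strings, explicit braces, trailing commas. The shape is roughly this (my reading of the KYAML proposal; exact rendering may vary by version):

{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: {
    name: "web",
    namespace: "default",
  },
}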

How to try this safely

- Use a disposable 1.34 cluster (kind or a scratch kubeadm install) with a matching kubectl.
- For KYAML, set KUBECTL_KYAML=true and compare -o kyaml against -o yaml for resources you template often; the format is opt-in, so existing scripts are untouched.
- For cgroup driver autodetection, confirm your runtime is new enough to report its driver over CRI (containerd 2.x or a recent CRI-O), then drop cgroupDriver from a test node's kubelet config and check the kubelet log on startup.


Closing thoughts

Kubernetes 1.34 isn’t about one flashy feature; it’s about smoother, more predictable operations and clearer visibility. Snapshotted, streamed reads make the control plane more resilient. Tracing creates a shared truth for debugging across API server, kubelet, and your container runtime. LimitedSwap gives you another tool for handling memory spikes, with reasonable guardrails. And the smaller items—KYAML and cgroup driver autodetect—chip away at real‑world paper cuts.

Pick one or two of these, try them in a lab, and capture before/after metrics. You’ll likely keep them when you move to production. (kubernetes.io)
