Kubernetes 1.34 in Practice: Five Changes You Can Adopt Now
Kubernetes 1.34 landed on August 27, 2025, and the first patch (1.34.1) followed on September 9. The release includes 58 enhancements, with 23 graduating to stable—plenty of practical wins for day‑2 operations. If you’re deciding what to try first, this guide focuses on five changes that are easy to understand and test, and that deliver value quickly. (kubernetes.io)
Why this release matters
A few themes stand out in 1.34:
- The control plane handles large reads more predictably.
- Traces are first‑class for both the API server and kubelet.
- Linux swap support is stable (with sensible guardrails).
- Security and auth see pragmatic upgrades.
- Small quality‑of‑life improvements reduce cluster foot‑guns.
Below, I’ll explain what each change does, why you should care, and how to try it safely.
1) The control plane breathes easier: snapshots + streaming lists
If you’ve ever watched your API server memory spike during a big list operation, this one’s for you. In 1.34, the “snapshottable API server cache” went beta and is enabled by default. Combined with earlier work—consistent reads from the API server cache (1.31) and streaming list responses (1.33)—the control plane can now serve most read requests directly from memory and stream them out without building huge buffers. That means fewer surprises under load and less pressure on etcd. (kubernetes.io)
What to do:
- Measure: watch apiserver memory, etcd CPU, and request latencies during heavy kubectl or controller list operations. You should see flatter curves. (kubernetes.io)
- Keep your controllers current: controllers that paginate or rely on historical resourceVersion reads benefit most; make sure you run recent controller versions that follow watch/list best practices. (kubernetes.io)
There’s nothing to turn on here—just upgrade and enjoy the smoother ride. (kubernetes.io)
2) End‑to‑end traces you can actually use (kube‑apiserver + kubelet)
Kubernetes components have supported tracing for some time, but 1.34 promotes tracing for the API server and kubelet to stable, with clean, versioned configuration. Both components export OpenTelemetry (OTLP) traces and default to an OTLP collector at localhost:4317 if you don’t specify an endpoint. Even better, kubelet propagates trace context to the container runtime (for example containerd or CRI‑O), so you can see a Pod’s lifecycle across control plane, node, and runtime in a single trace. (kubernetes.io)
How to try it:
- Point both components at your collector (Tempo, Jaeger, OTEL Collector):
# kube-apiserver (config file referenced with --tracing-config-file)
apiVersion: apiserver.config.k8s.io/v1
kind: TracingConfiguration
# endpoint defaults to localhost:4317 if omitted
samplingRatePerMillion: 5000
# kubelet (KubeletConfiguration snippet)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  # endpoint defaults to localhost:4317 if omitted
  samplingRatePerMillion: 5000
- Start with a low sample rate (e.g., 1 in 200) and increase as you gain confidence; exporting spans has a small but real overhead. (kubernetes.io)
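If nothing is listening on the default localhost:4317 yet, a minimal OpenTelemetry Collector configuration along these lines will receive spans from both components and hand them to your tracing backend. Treat it as a sketch: the exporter endpoint (tempo.observability.svc:4317) is a placeholder for whatever backend you actually run.
# OpenTelemetry Collector (sketch): accept OTLP/gRPC on 4317, batch, forward
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  otlp:
    endpoint: tempo.observability.svc:4317   # placeholder backend address
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
If you keep the default localhost endpoint in the Kubernetes tracing configs, the collector has to be reachable on each node (for example via a host port); otherwise set an explicit endpoint that points at a central collector.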
Once enabled, you can follow a request end‑to‑end: admission webhooks, etcd calls, and node‑level CRI operations appear as linked spans with shared context. It’s a big quality‑of‑life improvement for debugging slow starts and odd node behavior. (kubernetes.io)
3) Linux swap goes stable—use it carefully
Swap support for Linux nodes has been a long journey in Kubernetes. With 1.34, node swap is stable with a “LimitedSwap” mode that keeps behavior predictable: only Burstable QoS Pods can use swap, and the amount allowed is bounded automatically based on each container’s memory request and the node’s capacity. Kubernetes still defaults to NoSwap, so you opt in per node. (kubernetes.io)
How to enable on a Linux node (cgroup v2 required):
1) Provision swap on the host (file or partition) and ensure it’s active at boot.
2) Configure the kubelet:
# KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap
3) Restart the kubelet.
Pods in Burstable QoS can now use a bounded amount of swap; Guaranteed and high‑priority Pods don’t touch swap. This gives you a gentler failure mode during memory spikes without turning your cluster into a paging festival. (kubernetes.io)
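For reference, only Pods in the Burstable QoS class are swap‑eligible: they set some requests or limits but don’t meet the Guaranteed bar of identical CPU and memory requests and limits on every container. A hypothetical example (the name and image are placeholders):
# Hypothetical Burstable Pod: the memory request is below the limit, so the
# Pod is not Guaranteed. Under LimitedSwap its swap allowance is derived from
# the 512Mi request relative to the node's memory and swap capacity.
apiVersion: v1
kind: Pod
metadata:
  name: swap-eligible-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder
    resources:
      requests:
        memory: 512Mi
      limits:
        memory: 1Gi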
Caveats:
- This is Linux‑only; ensure the node runs cgroup v2. (v1-33.docs.kubernetes.io)
- Measure before/after: swap can help with spiky footprints, but it’s still slower than RAM. Start small and track p99 latencies. (kubernetes.io)
4) Practical security wins: AppArmor fields and selector‑aware authorization
Two security changes are immediately useful:
- AppArmor with first‑class fields: If you’ve been using the old AppArmor annotations, migrate to the Pod/Container securityContext fields. AppArmor support has matured, and the 1.34 cycle tightens the migration: the API server warns on annotation use and treats the fields as canonical, reducing ambiguity for policy tools. Use runtime/default or a localhost/ profile via securityContext instead of annotations (see the Pod sketch after this list). (kubernetes.io)
- Authorization with field/label selectors: You can now write authorizer policies (including webhook authorizers and the node authorizer) that require specific field or label selectors on list/watch/deletecollection requests. For example, you can allow “list Pods only on this node” by requiring spec.nodeName in the request’s fieldSelector. This enables least‑privilege patterns that were awkward or impossible before. (kep.k8s.io)
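Here is what the field‑based AppArmor form looks like on a Pod. The image and the localhost profile name are illustrative, and the Localhost type only works if that profile is already loaded on the node:
# AppArmor via securityContext fields rather than annotations.
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-fields-demo
spec:
  securityContext:
    appArmorProfile:
      type: RuntimeDefault              # pod-wide default, like runtime/default
  containers:
  - name: locked-down
    image: registry.example.com/app:latest   # placeholder
    securityContext:
      appArmorProfile:
        type: Localhost                 # per-container override
        localhostProfile: k8s-apparmor-example   # illustrative profile name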
What to do:
- Update admission/policy controllers to read AppArmor from securityContext, not annotations.
- If you run a custom authorizer, add rules that depend on selectors where it sharpens least privilege (per‑node kubelet permissions, tenant label scoping, and so on). (kep.k8s.io)
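To make the selector‑aware model concrete, this is roughly the SubjectAccessReview a webhook authorizer sees when a node‑scoped client lists Pods with a spec.nodeName field selector; a policy can require that selector before allowing the list. The user and node name are illustrative, and the exact field shape should be checked against the authorization.k8s.io/v1 types in your cluster:
# Sketch of the request a webhook authorizer receives for
# "list pods, restricted to spec.nodeName=node-a".
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:node:node-a              # illustrative caller
  resourceAttributes:
    verb: list
    resource: pods
    fieldSelector:
      requirements:
      - key: spec.nodeName
        operator: In
        values: ["node-a"]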
5) Small things that remove big headaches: KYAML output and cgroup driver autodetect
Two developer‑experience fixes worth adopting:
- KYAML output in kubectl (alpha): kubectl can emit KYAML, a constrained YAML dialect designed to avoid gotchas like implicit typing. Set KUBECTL_KYAML=true and try -o kyaml to generate safer manifests and machine‑friendly output. It’s alpha, but KYAML is a subset that’s always valid YAML, so it’s low‑risk to experiment with. (kubernetes.io)
Example:
export KUBECTL_KYAML=true
kubectl get deploy -n default -o kyaml
- Kubelet learns its cgroup driver from the runtime (GA): If you’ve ever mismatched systemd vs cgroupfs and chased weird kubelet behavior, this helps. With 1.34 the kubelet can discover the correct cgroup driver from your CRI implementation, which removes a common class of configuration drift. (kubernetes.io)
How to try this safely
- Lab first, then prod: Spin up a small 1.34.x cluster and validate the two big themes for your workloads: API server smoothness under list/watch load, and tracing through your collector. The latest release info is on the Kubernetes site; as of September 9, 1.34.1 is current. (kubernetes.io)
- Managed Kubernetes timelines vary: Providers gate new minors by channel. For example, in early August the GKE Regular channel defaulted to 1.33.2, and new minors roll out gradually across zones. Expect 1.34 to appear in faster channels first and become the default later. Check your provider’s release notes. (cloud.google.com)
- Guardrails for swap: Start with a small test node pool with LimitedSwap; keep latency SLOs and eviction behavior on your radar; verify only Burstable Pods are using swap as intended. (kubernetes.io)
- Trace only what you need: Begin with a conservative sampling rate and a component or two (kube‑apiserver and kubelet). Verify end‑to‑end span stitching before dialing up sampling. (kubernetes.io)
Closing thoughts
Kubernetes 1.34 isn’t about one flashy feature; it’s about smoother, more predictable operations and clearer visibility. Snapshotted, streamed reads make the control plane more resilient. Tracing creates a shared truth for debugging across API server, kubelet, and your container runtime. LimitedSwap gives you another tool for handling memory spikes, with reasonable guardrails. And the smaller items—KYAML and cgroup driver autodetect—chip away at real‑world paper cuts.
Pick one or two of these, try them in a lab, and capture before/after metrics. You’ll likely keep them when you move to production. (kubernetes.io)
If you want a checklist tailored to your environment (controller mix, workloads, or managed provider), tell me a bit about your cluster and I’ll help you draft one.