Automating Kubernetes deployments with GitHub Actions: secure OIDC authentication and protected environments
Continuous delivery to Kubernetes is great — until your CI pipeline has permanent cloud keys sitting in a secrets store. A modern pattern is to let GitHub Actions authenticate to your cloud provider using short‑lived tokens (OIDC), then apply manifests to your cluster and watch the rollout finish. That removes long‑lived secrets, gives you better auditability, and lets you gate production pushes with GitHub Environments and required reviewers.
This article walks through a practical, secure pattern for automated Kubernetes deployments with GitHub Actions:
- Use GitHub’s OIDC provider to get short‑lived cloud credentials (AWS, GCP, or Azure).
- Run kubectl (or kustomize/helm) from the workflow to apply manifests.
- Gate production with GitHub Environments and required reviewers.
- Wait for rollout completion and fail fast on broken deployments.
Why this matters (short version)
- No long‑lived cloud keys in GitHub — OIDC issues a short JWT per job that the cloud provider validates. That reduces secret sprawl and risk. (docs.github.com)
- Cloud providers offer official ways to accept GitHub OIDC tokens (AWS IAM trust, GCP Workload Identity, Azure federated credentials). Using those keeps auth centralized in the cloud provider and avoids copying service account keys. (docs.github.com)
- GitHub Environments let you require approvals for sensitive deployments and scope secrets to a specific environment. (docs.github.com)
- Kubernetes exposes rollout status so CI can wait for a deployment to be healthy (or fail the workflow if not). (kubernetes.io)
Overview of the pattern
- Configure trust between GitHub and your cloud account (OIDC).
- Create a workflow that:
  - Checks out code.
  - Authenticates to cloud via OIDC (cloud provider action).
  - Runs kubectl/helm/kustomize to apply manifests.
  - Uses kubectl rollout status (or helm’s status) to confirm success.
  - Targets different GitHub Environments (staging, prod) with approvals for prod.
- Keep IAM roles / service accounts tight: least privilege for the job.
Think of OIDC as a concert hall usher: GitHub vouches for a specific show (workflow run) and the cloud only hands out a backstage pass for that performance. No one gets a permanent master key.
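To make the cloud side concrete, here is a minimal sketch of the AWS trust configuration, assuming the GitHub OIDC identity provider already exists in the account; the account ID, repository, and role name are the same placeholders used in the workflow below, and the sub condition restricts the role to the production environment of a single repo:

```bash
# Sketch: an IAM role that only GitHub Actions runs for myorg/myapp's
# "production" environment can assume via OIDC (placeholder values throughout).
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:myorg/myapp:environment:production"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name github-actions-deploy \
  --assume-role-policy-document file://trust-policy.json
# Attach only the narrow permissions the deploy needs (e.g. eks:DescribeCluster plus
# Kubernetes RBAC mapped to this role), not a broad administrator policy.
```

If you also need deploys that do not run in a GitHub Environment (for example, staging pushes from main), the sub condition can instead match repo:myorg/myapp:ref:refs/heads/main, or use StringLike with a narrow wildcard.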
Example: AWS + GitHub Actions + kubectl (high level)
Below is a compact workflow that demonstrates the key bits: requesting an OIDC token, using aws-actions to assume an IAM role, and then applying manifests and waiting for the rollout.
.github/workflows/deploy.yml
```yaml
name: Deploy to Kubernetes

on:
  push:
    branches:
      - main

permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # references a GitHub Environment (can require approvals)
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v3
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-west-2

      - name: Set up Kubectl
        uses: azure/setup-kubectl@v4
        with:
          version: 'v1.27.0'

      - name: Update kubeconfig for the target cluster
        # Assumes an EKS cluster; the cluster name is a placeholder.
        run: aws eks update-kubeconfig --name my-cluster --region us-west-2

      - name: Update image and apply k8s manifests
        env:
          IMAGE: ghcr.io/myorg/myapp:${{ github.sha }}   # tag is assumed to be the commit SHA
        run: |
          # Example: a templating or kustomize step could go here
          kubectl set image deployment/myapp myapp=$IMAGE -n mynamespace
          kubectl rollout status deployment/myapp -n mynamespace --timeout=3m
```
Notes on that snippet
- The permissions block is required so the job can request an OIDC token from GitHub. Without id-token: write, the cloud auth steps will fail. (docs.github.com)
- aws-actions/configure-aws-credentials (and similar cloud login actions) rely on the OIDC token; on AWS you create an IAM identity provider and a role with a trust policy that accepts tokens from token.actions.githubusercontent.com. That avoids storing AWS keys in GitHub secrets; a quick way to verify which role the job actually assumed is sketched after this list. (docs.github.com)
- Always pin action versions (e.g., @v3 or @v4) or use commit SHAs for maximum reproducibility.
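One cheap sanity check, right after the credentials step, is to print the identity the job actually assumed; if the OIDC exchange worked, this reports the assumed role rather than an access error:

```yaml
      - name: Verify assumed AWS identity
        run: aws sts get-caller-identity   # should show the github-actions-deploy role
```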
Authenticating to other clouds (GCP, Azure)
- GCP: Use Workload Identity Federation + google-github-actions/auth to exchange the GitHub OIDC token for a GCP credential. This is preferred over uploading a JSON key file, and Google docs walk through the Workload Identity Pool/provider and service account impersonation steps; a workflow sketch follows this list. (cloud.google.com)
- Azure: Use federated credentials configured in an Azure AD app registration and use azure/login or the Azure CLI flow to acquire tokens without a client secret. GitHub’s security hardening docs list the steps for each provider. (docs.github.com)
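For GCP, the login portion of the job might look like the sketch below; the workload identity provider resource name, service account, cluster, and location are placeholders, and the job still needs id-token: write as in the AWS example:

```yaml
    steps:
      - uses: actions/checkout@v4

      - name: Authenticate to Google Cloud via OIDC
        uses: google-github-actions/auth@v2
        with:
          # Placeholder pool/provider and service account names.
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/github/providers/my-repo
          service_account: github-deployer@my-project.iam.gserviceaccount.com

      - name: Get GKE credentials
        uses: google-github-actions/get-gke-credentials@v2
        with:
          cluster_name: my-cluster      # placeholder cluster
          location: us-central1

      - name: Deploy and wait for rollout
        run: |
          kubectl set image deployment/myapp myapp=ghcr.io/myorg/myapp:${{ github.sha }} -n mynamespace
          kubectl rollout status deployment/myapp -n mynamespace --timeout=3m
```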
Gate production with GitHub Environments
GitHub Environments are a simple way to both store environment‑specific secrets and require manual approvals for protected environments (like production). Create an environment called production, add any required secrets (e.g., cluster kubeconfig if you use that route), and enable required reviewers so the job will pause until someone approves. This provides a clear audit trail in GitHub of who allowed the deployment. (docs.github.com)
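On the workflow side, the job opts into the environment by name, and can attach a deployment URL so the approval and deployment history link to the running app (the URL is a placeholder):

```yaml
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://myapp.example.com   # placeholder; shown in the environment's deployment history
```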
Best practices and hardening tips
- Least privilege for cloud roles: scope the IAM role or service account to only the actions the workflow needs (for example, a deploy job rarely needs to manage EKS/GKE node groups; grant just the cluster or Kubernetes API access it requires). Configure OIDC trust conditions (sub, repo, environment) to limit which workflows can assume the role. (docs.github.com)
- Pin action versions: avoid “latest” — CI can break when an action upgrades its runtime. Use semantic pins or commit SHAs. (Common operational hygiene.)
- Prefer declarative applies: use kustomize or kubectl apply with predictable, version‑controlled manifests (kubectl apply does a three‑way merge; --server-side adds field‑level ownership); see the sketch after this list.
- Observe rollouts: using kubectl rollout status (or helm’s equivalent, such as helm upgrade --wait) lets your workflow confirm the new version is healthy before the job passes. If the rollout fails, fail the job and alert (or open a revert PR). (kubernetes.io)
- Reusable workflows: put deploy logic into a reusable workflow that is called by specific repos; remember the OIDC id-token permission must be set where needed. (github.blog)
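As a sketch of the apply‑and‑observe step referenced above, assuming a hypothetical kustomize overlay at deploy/overlays/production and an app=myapp pod label, the run block can fail fast and surface diagnostics when the rollout does not converge:

```bash
set -euo pipefail

# Apply predictable, version-controlled manifests (hypothetical overlay path).
kubectl apply -k deploy/overlays/production

# Wait for the new ReplicaSet to become healthy; on failure, dump context and fail the job.
if ! kubectl rollout status deployment/myapp -n mynamespace --timeout=3m; then
  echo "Rollout failed or timed out; recent pod state and events follow." >&2
  kubectl get pods -n mynamespace -l app=myapp -o wide
  kubectl get events -n mynamespace --sort-by=.lastTimestamp | tail -n 20
  exit 1
fi
```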
Debugging tips
- If OIDC role assumption fails on AWS, check the IAM trust policy and the sub/aud claims GitHub is sending. You can temporarily add a debug step (or a community OIDC debug action) to print the token claims (don’t leak them into logs in production). Community issues have sometimes tripped users up when role names contain unexpected substrings; verify the role ARN is correct. (github.com)
- If kubectl rollout hangs, describe the pods and events (kubectl describe pod) to find image pull, liveness/readiness, or CrashLoopBackOff issues (see the commands sketched after this list). Don’t blindly increase the rollout timeout; address the root cause.
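For the rollout‑hang case, a few commands usually reveal the cause; the namespace and app=myapp label are the placeholders used earlier, and <pod-name> comes from the get pods output:

```bash
# Which pods are unhealthy, and why (ImagePullBackOff, CrashLoopBackOff, pending scheduling)?
kubectl get pods -n mynamespace -l app=myapp
kubectl describe pod <pod-name> -n mynamespace   # events show image pull and probe failures

# Logs from the current container and, if it crashed, the previous one.
kubectl logs <pod-name> -n mynamespace
kubectl logs <pod-name> -n mynamespace --previous
```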
A note on reusable workflows and permissions
When calling reusable workflows (one workflow invoking another), be explicit about id-token: write on the caller if the called workflow needs to fetch the OIDC token. GitHub tightened permissions to avoid accidentally leaking OIDC tokens across workflow boundaries; set the permission explicitly at the level that requests the token. (github.blog)
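A minimal caller sketch, assuming a hypothetical shared workflow at myorg/deploy-workflows that accepts an environment input; note that id-token: write is declared on the calling job so the reusable workflow can request the token:

```yaml
name: Deploy (via shared workflow)
on:
  push:
    branches: [main]

jobs:
  deploy:
    # Hypothetical repository and workflow path.
    uses: myorg/deploy-workflows/.github/workflows/k8s-deploy.yml@main
    permissions:
      id-token: write   # lets the called workflow fetch the OIDC token
      contents: read
    with:
      environment: production
    secrets: inherit
```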
When you might still need secrets
OIDC removes many use cases for stored cloud keys, but some workflows still need secrets (an example of wiring one in is sketched after this list):
- Third‑party service API keys (e.g., monitoring, feature flags) — keep these in GitHub or an external secrets provider.
- Long‑lived tokens for tooling that doesn’t support federated auth — consider a vault with short TTL secrets.
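A minimal sketch of wiring such a secret into the deploy job, assuming a hypothetical DATADOG_API_KEY stored on the production environment and a hypothetical notification script in the repo:

```yaml
      - name: Notify monitoring of the deploy
        env:
          # Resolved from the job's "production" environment first, then repo/org secrets.
          DATADOG_API_KEY: ${{ secrets.DATADOG_API_KEY }}
        run: ./scripts/notify-deploy.sh   # hypothetical helper that reads DATADOG_API_KEY
```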
Closing analogy: treat your pipeline like a sound check
In live music, you don’t hand everyone the master volume control; you let a stage manager hand out temporary passcards for a single show and require a tech to check levels before going live. OIDC gives your workflow a temporary pass; GitHub Environments and rollout checks make sure the sound levels (your deployment health) are correct before the audience sees the show.
Further reading and quick references
- GitHub: OpenID Connect / security hardening docs. (docs.github.com)
- AWS: configure-aws-credentials action & OIDC guidance. (docs.github.com)
- Google Cloud: Workload Identity Federation with GitHub Actions. (cloud.google.com)
- Kubernetes: Deployments and using kubectl rollout status. (kubernetes.io)