Carbon‑Aware CI/CD: Turning Real‑Time Grid Emissions Into Greener Deploys
Sustainability in DevOps isn’t just about reporting anymore. Over the past few cycles, the building blocks for practical, day‑to‑day emissions reduction have matured: we have an ISO standard to measure software emissions, widely used open tooling to fetch carbon‑intensity data, and a new standard that pushes cloud providers toward real‑time transparency. If you run CI/CD and Kubernetes today, you can start cutting carbon without slowing your teams down.
This article shows a simple, production‑friendly way to make your pipelines carbon‑aware: measure a baseline, add an emissions “gate” to CI, pick greener regions for batch jobs, and close the loop with runtime telemetry. Along the way, I’ll point to the standards and tools that make this work credible and repeatable.
What’s new (and why it matters)
- Software Carbon Intensity (SCI) is now an ISO standard. SCI provides a consistent way to calculate a software system’s rate of emissions so teams can set targets and compare options, which is crucial if you want to prove your CI/CD changes helped. It was published as ISO/IEC 21031:2024 in March 2024. (iso.org)
- The Carbon Aware SDK has “graduated” under the Green Software Foundation (GSF). It’s a stable Web API and CLI that aggregates grid emissions data from sources like Electricity Maps and WattTime, so your software can choose cleaner times and places to run. The project reached graduated status on April 16, 2024; a v1.4 update moved it to .NET 8 for better metrics support. (greensoftware.foundation)
- Real‑time cloud emissions data is coming. GSF ratified the Real Time Energy and Carbon Standard for Cloud Providers in April 2025, pushing toward minute‑level energy data and near‑real‑time carbon intensity instead of the delayed monthly reports many cloud customers see today. This matters because better data makes carbon‑aware automation far more precise. (greensoftware.foundation)
- The “SCI for Web” effort kicked off this summer to tailor SCI to websites and web apps (with W3C and GSF involved). Even if you’re not on the web team, this shows the direction: domain‑specific, auditable sustainability metrics that product teams can ship against. (thegreenwebfoundation.org)
A practical workflow you can ship this sprint
Here’s a minimal, low‑risk path to carbon‑aware DevOps. It doesn’t require rewriting apps or changing orchestrators—just a few smart checks in your pipeline and some telemetry.
1) Establish a baseline you can defend
- Use SCI for the before/after. Pick a functional unit that fits your service (e.g., “per 1k API requests” or “per model training epoch”), capture energy use and the carbon intensity of the electricity consumed, and record your SCI score. The point is consistency across changes, not perfection on day one. (iso.org)
- Add cluster‑level energy telemetry. On Kubernetes, the Kepler project exports estimated power usage per container/pod using eBPF and Prometheus, which is handy for seeing which workloads move the needle. Treat the numbers as directional: a 2025 evaluation found accuracy limitations and proposed alternatives such as KubeWatt. Use Kepler for trends, not financial‑grade accounting. (cncf.io)
Deliverable for your backlog: a dashboard with SCI by pipeline and Kepler power by workload.
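As a concrete illustration, the SCI specification defines the score as ((E × I) + M) per R: energy consumed, grid carbon intensity, embodied emissions, and the functional unit. A minimal shell sketch with made‑up placeholder numbers (these are not measurements):

```shell
# SCI = ((E * I) + M) per R, following the SCI specification.
# All values below are illustrative placeholders, not real measurements.
E=1.2     # energy consumed by the pipeline run, in kWh
I=430     # carbon intensity of the electricity, in gCO2e/kWh
M=50      # embodied-hardware emissions attributed to the run, in gCO2e
R=1000    # functional units, e.g. 1k API requests served

# Shell arithmetic is integer-only, so use awk for the floating-point math.
SCI=$(awk -v e="$E" -v i="$I" -v m="$M" -v r="$R" \
  'BEGIN { printf "%.3f", (e * i + m) / r }')
echo "SCI: $SCI gCO2e per functional unit"
```

Whatever functional unit you pick, keep it fixed across releases so before/after comparisons stay meaningful.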
2) Add a carbon‑aware “gate” to CI
Goal: If a greener 15‑ to 60‑minute window is imminent, delay a non‑urgent job (e.g., nightly builds, integration tests, data pipelines) until then. If not, run now. This is deadline‑aware and developer‑friendly.
You can use the Carbon Aware SDK as a CLI (or Web API) inside your runner. The CLI exposes emissions forecasts and returns the best window for a given workload duration. (carbon-aware-sdk.greensoftware.foundation)
Example: GitHub Actions job that waits (up to 45 minutes) for a cleaner window in a chosen region using Electricity Maps as the data source.
```yaml
name: carbon-aware-build

on:
  workflow_dispatch:
  schedule:
    - cron: "0 * * * *" # hourly

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      DataSources__EmissionsDataSource: ElectricityMaps
      DataSources__ForecastDataSource: ElectricityMaps
      DataSources__Configurations__ElectricityMaps__Type: ElectricityMaps
      DataSources__Configurations__ElectricityMaps__APITokenHeader: auth-token
      # Store your Electricity Maps token as a repo secret; the secret name here is a placeholder.
      DataSources__Configurations__ElectricityMaps__APIToken: ${{ secrets.ELECTRICITYMAPS_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: "8.0.x"
      - name: Build Carbon Aware CLI
        run: |
          git clone https://github.com/Green-Software-Foundation/carbon-aware-sdk.git
          dotnet publish ./carbon-aware-sdk/src/CarbonAware.CLI/src/CarbonAware.CLI.csproj -c Release -o ./caw
      - name: Wait for greener window (max 45 min)
        run: |
          sudo apt-get update && sudo apt-get install -y jq
          FORECAST=$(./caw/caw emissions-forecasts -l eastus -w 30)
          BEST_TS=$(echo "$FORECAST" | jq -r '.[0].optimalDataPoint.timestamp')
          NOW=$(date -u +%s)
          BEST_EPOCH=$(date -d "$BEST_TS" +%s)
          DELAY=$((BEST_EPOCH - NOW))
          MAX_DELAY=$((45*60))
          if [ "$DELAY" -gt 0 ] && [ "$DELAY" -le "$MAX_DELAY" ]; then
            echo "Sleeping $DELAY seconds until greener window at $BEST_TS (eastus)…"
            sleep "$DELAY"
          else
            echo "No greener window within 45 minutes; continuing now."
          fi
      - name: Run tests
        run: ./scripts/run-tests.sh
```
Notes:
- The CLI can also list regions and return averages or best windows over a time range; see the docs for options like --best and --average. (carbon-aware-sdk.greensoftware.foundation)
- If you prefer, there’s a GitHub Action built atop the CLI; check the SDK’s docs for details. (carbon-aware-sdk.greensoftware.foundation)
3) Let your pipeline pick the greenest region for batch jobs
If your workload can run in multiple cloud regions, query forecasts for each and choose the one with the lowest expected intensity for your job’s window.
Example snippet (bash + jq):
```bash
FORECAST=$(./caw/caw emissions-forecasts -l eastus -l westus2 -w 20)
BEST_REGION=$(echo "$FORECAST" | jq -r 'map({region: .location, val: .optimalDataPoint.value}) | sort_by(.val) | .[0].region')
echo "Deploying batch job to $BEST_REGION"
# export to pipeline env if needed
```
This is the “location shifting” half of carbon‑aware computing; the previous step handled “time shifting.” The SDK abstracts over data providers (e.g., Electricity Maps, WattTime), so you can swap sources without changing your pipeline. (greensoftware.foundation)
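To sanity‑check the pick‑the‑minimum logic offline, you can exercise it on static values. The regions and intensities below are made up; in a real pipeline they would come from the forecast query:

```shell
# Hypothetical per-region forecast values in gCO2e/kWh; lowest wins.
# "region value" pairs, one per line, stand in for live CLI output.
FORECAST_TABLE='eastus 412.5
westus2 388.1
northeurope 96.4'

# Numeric sort on the second field, then take the first region name.
BEST_REGION=$(printf '%s\n' "$FORECAST_TABLE" | sort -k2 -n | head -n1 | awk '{print $1}')
echo "Deploying batch job to $BEST_REGION"
```

This keeps the selection logic testable even when you have no API token handy.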
4) Close the loop at runtime
- Scale to zero (and back) for event‑driven workloads so you’re not burning watts while idle. This is standard Kubernetes hygiene that also saves carbon.
- Keep Kepler metrics in Prometheus/Grafana and track per‑workload energy trends as you roll out carbon‑aware gating. If you need high‑accuracy measurements for a subset of nodes, validate with hardware counters and treat the findings as calibration for your estimators. (cncf.io)
- Record the forecast chosen and the delay applied as build metadata. That audit trail will matter when you publish improvements against your SCI baseline. (iso.org)
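That audit record can be as simple as a JSON file attached to the build. A minimal sketch, assuming nothing beyond POSIX shell; the file name and field names are our own convention, not part of any standard:

```shell
# Record the carbon-aware gating decision as build metadata.
# Values here are illustrative; in a pipeline they come from the gate step.
REGION="eastus"
FORECAST_INTENSITY="412.5"   # gCO2e/kWh at the chosen window
DELAY_APPLIED="1800"         # seconds slept before running

cat > carbon-decision.json <<EOF
{
  "region": "$REGION",
  "forecast_gco2e_per_kwh": $FORECAST_INTENSITY,
  "delay_seconds": $DELAY_APPLIED,
  "recorded_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
echo "Wrote carbon-decision.json"
```

Upload the file as a build artifact so each run’s decision is auditable later.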
What kind of impact should you expect?
Published research on carbon‑aware scheduling suggests impressive headroom when workloads are flexible:
- For geo‑distributed, latency‑sensitive web services, the CASPER system shows that balancing SLOs with carbon‑aware region selection can cut emissions by up to 70% compared to baselines, with no latency degradation in its evaluation. Think of this as “smart load balancing” guided by carbon intensity. (arxiv.org)
- Learning‑augmented autoscaling approaches (e.g., LACS) demonstrate around 32% lower carbon footprint versus carbon‑agnostic execution while still hitting deadlines, a useful intuition for deadline‑aware CI and data pipelines. (arxiv.org)
- Scientific workflow studies find that simply time‑shifting long, interruptible workflows can reduce emissions by over 80% when you have generous scheduling flexibility. Your mileage will vary, but it’s a healthy upper bound to keep in mind for batch work. (arxiv.org)
The takeaway: start with deadline‑tolerant jobs, keep a modest max delay (e.g., ≤45 minutes), and iterate from there.
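The capped‑delay policy can be factored into a small helper that mirrors the shell in the gate example; the epoch timestamps below are illustrative:

```shell
# Decide how long to wait: sleep only if the greener window starts in the
# future and within the cap; otherwise run immediately (return 0).
compute_delay() {
  best_epoch=$1; now_epoch=$2; max_delay=$3
  delay=$((best_epoch - now_epoch))
  if [ "$delay" -gt 0 ] && [ "$delay" -le "$max_delay" ]; then
    echo "$delay"
  else
    echo 0
  fi
}

compute_delay 1700003000 1700001200 2700   # window in 30 min, cap 45 min -> 1800
compute_delay 1700010000 1700001200 2700   # window beyond the cap -> 0
```

Keeping this decision in one function makes the cap easy to tune per job class and easy to log.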
Guardrails that keep developers happy
- Set a hard cap on delay per job class (e.g., 15 minutes for integration tests, 45 for nightly builds). Make the cap visible in logs.
- Never gate urgent or user‑facing production deploys. Restrict carbon‑aware gating to non‑urgent, batch, or off‑peak jobs first.
- Provide a “break glass” override (e.g., a CI variable OVERRIDE_GREENER_WINDOW=true) to skip waiting during incidents.
- Make decisions explainable. Log the chosen region, the forecast intensity, and the delay applied. Include links to the underlying forecast.
- Measure and celebrate wins. Track before/after SCI scores per pipeline; set targets per quarter. (iso.org)
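Taken together, the per‑class cap and the break‑glass override fit in a few lines of pipeline shell. JOB_CLASS and OVERRIDE_GREENER_WINDOW are variable names of our own choosing, not part of any CI system:

```shell
# Per-job-class delay caps plus a break-glass override.
JOB_CLASS="nightly"                  # normally injected by the CI system
OVERRIDE_GREENER_WINDOW="${OVERRIDE_GREENER_WINDOW:-false}"

case "$JOB_CLASS" in
  integration) MAX_DELAY=$((15*60)) ;;
  nightly)     MAX_DELAY=$((45*60)) ;;
  *)           MAX_DELAY=0 ;;        # unknown classes never wait
esac

if [ "$OVERRIDE_GREENER_WINDOW" = "true" ]; then
  MAX_DELAY=0                        # incident in progress: skip waiting entirely
fi
echo "Max delay for $JOB_CLASS jobs: ${MAX_DELAY}s"
```

Echoing the resolved cap keeps the policy visible in build logs, which is most of what developers ask for.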
What to watch next
- Cloud‑native support for real‑time emissions data. As providers align with the Real Time Cloud standard, you’ll be able to replace public proxies with provider‑verified, high‑frequency data, improving both accuracy and auditability. (greensoftware.foundation)
- Domain‑specific SCI. The “SCI for Web” initiative should make sustainability work more actionable for front‑end and platform teams that ship web experiences. (thegreenwebfoundation.org)
- Better energy observability in Kubernetes. Kepler is a good start; expect continued scrutiny and alternatives that improve accuracy at container granularity. (cncf.io)
Wrap‑up
Sustainable DevOps doesn’t need a moonshot. With an ISO‑standard measurement (SCI), a mature carbon‑intensity SDK you can call from any pipeline, and a push toward real‑time cloud emissions data, teams can make pragmatic, incremental changes that add up. Start by gating non‑urgent jobs to greener windows, teach your pipeline to prefer cleaner regions, and measure the effect with SCI and runtime telemetry. You’ll likely see meaningful carbon reductions without sacrificing delivery speed—and you’ll have a defensible story when you share results with your organization. (iso.org)