Kubernetes for Beginners: Pods, Deployments, and Services — how to expose your app (ClusterIP, NodePort, LoadBalancer, and MetalLB)
Kubernetes can feel like a small city with its own rules: Pods are apartments, Deployments are the property managers, and Services are the roads and bridges that let people reach the apartments. For a beginner, understanding how these pieces work together — especially how Services expose a Deployment to other parts of the cluster or the outside world — is one of the most useful skills you can learn.
This article explains Pods, Deployments, and Services in plain language, shows a minimal example, and clarifies the differences between ClusterIP, NodePort, and LoadBalancer — including what to do when you’re running on bare metal (MetalLB).
Pods — the smallest unit
A Pod is the smallest deployable unit in Kubernetes: one or more containers that share network and storage and run together on the same node. Think of a Pod like a single apartment where roommates (containers) share the same phone line (IP address) and filesystem. Most apps use one container per Pod; sometimes you’ll add a “sidecar” container for logs, proxies, or other helper tasks. Pods are ephemeral — controllers create and replace them as needed, so don’t rely on any single Pod’s name or lifespan. (kubernetes.io)
Key takeaways:
- A Pod has a single IP and shared storage volumes for its containers.
- Pods are created by higher-level controllers (like Deployments); you generally don’t manage long-lived Pods manually.
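To make this concrete, here is a minimal standalone Pod manifest (the name and labels are just illustrative). It's fine for experimenting, but note that a bare Pod like this is not recreated by any controller if its node fails — which is exactly why Deployments exist:

```yaml
# A minimal single-container Pod. Name, labels, and image are
# illustrative; in practice a Deployment manages Pods like this.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:stable
    ports:
    - containerPort: 80
```

Apply it with kubectl apply -f pod.yaml and clean up with kubectl delete pod hello-pod when you're done.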
Deployments — reliable, updatable replicas
A Deployment is the controller you’ll use to run stateless apps. It declares the desired state — e.g., “3 replicas of nginx” — and the Deployment controller ensures the cluster matches that state. When you update the Pod template, the Deployment performs a controlled rollout (creating a new ReplicaSet and gradually replacing old Pods). This gives you rolling updates, rollbacks, and easy scaling. In our apartment-city metaphor, a Deployment is the property manager who ensures the right number of identical apartments are available and replaced safely when renovated. (kubernetes.io)
Why use a Deployment?
- Declarative updates: describe what you want, not how to change it.
- Rollouts and rollbacks: safe updates with status feedback.
- Scaling: change replica count to handle more or less traffic.
Services — stable network entry points
Deployments create Pods, but Pods are ephemeral and their IPs change. A Service provides a stable network endpoint (a virtual IP or DNS name) and routes traffic to the set of Pods matching its selector. Services decouple clients from Pod lifecycle so the frontend can always reach the backend. (kubernetes.io)
Kubernetes supports several Service types; the most common for beginners are:
- ClusterIP (default): exposes the Service on an internal cluster IP. Only reachable within the cluster.
- NodePort: exposes the Service on a static port on every Node’s IP. You can reach the Service via NodeIP:NodePort from outside the cluster.
- LoadBalancer: asks the cloud provider to provision an external load balancer and assigns an external IP to the Service. Kubernetes integrates with cloud control planes to set this up. (kubernetes.io)
Quick note: LoadBalancer doesn’t magically create networking on bare-metal clusters — it relies on an external load balancing implementation or cloud provider integration. If your cluster is on cloud VMs, the cloud usually provisions the load balancer; if you’re on bare metal you need an add-on like MetalLB to actually get a LoadBalancer IP. (kubernetes.io)
A minimal example: Deployment + Service
Here’s a tiny example that creates a Deployment of nginx and exposes it with a Service. Save the YAML to a file (e.g., app.yaml) and apply it with kubectl.
Deployment (3 replicas):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
Service (ClusterIP, internal only):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
Commands:
- Apply: kubectl apply -f app.yaml
- Check Pods: kubectl get pods
- Check Service: kubectl get svc nginx-svc
- Describe endpoints: kubectl describe svc nginx-svc
If you want to expose to the outside world for quick testing, change type: ClusterIP to type: NodePort (or use kubectl expose). NodePort will allocate a port in the 30000–32767 range and you can reach any Node’s IP at that port. For cloud-based clusters, type: LoadBalancer will usually provision an external IP automatically. (kubernetes.io)
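If you'd rather pin the node port yourself instead of letting Kubernetes pick one, here is a sketch of the NodePort variant of the Service above (the value 30080 is just an example from the default range):

```yaml
# Same selector and ports as nginx-svc, but reachable from outside the
# cluster at any node's IP on port 30080. nodePort is optional; if
# omitted, Kubernetes allocates a free port from 30000-32767.
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
```

After applying it, curl http://NODE_IP:30080 from outside the cluster should reach one of the nginx Pods.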
When LoadBalancer is “Pending” (and what to do on bare metal)
If you create a Service of type: LoadBalancer and kubectl get svc shows its EXTERNAL-IP stuck at pending, Kubernetes is waiting for an external load balancer to be provisioned — the control plane asks the cloud controller manager to create the external LB and populate the Service’s .status.loadBalancer field once it’s done. If your cluster lacks that cloud integration (common on bare metal, DIY clusters, and some local dev setups), the Service stays pending indefinitely. (kubernetes.io)
If you’re on bare metal, MetalLB is the common solution: it provides a load-balancer implementation that hands out external IPs from a configured pool and integrates with the Service API so that type: LoadBalancer works as expected. Install MetalLB and give it an address pool, and your LoadBalancer Services will receive real external IPs. MetalLB supports both L2 (layer-2) and BGP modes for different networking environments. (metallb.io)
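As a sketch, a minimal MetalLB layer-2 configuration looks like this. The address range here is an assumption — substitute a block of unused IPs from your own LAN, and note this requires MetalLB to already be installed in the cluster:

```yaml
# MetalLB L2 mode: hand out Service IPs from a pool of spare LAN
# addresses. The range below is an example and must be free on
# your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - lab-pool
```

With this in place, new type: LoadBalancer Services should receive an external IP from the pool within a few seconds.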
Practical tips for beginners
- Start internal: get ClusterIP + kubectl port-forward working before exposing anything externally. That reduces blast radius while you learn.
- Use Deployments for stateless apps; use StatefulSets when your workload needs stable network IDs or storage.
- For local development (kind, minikube), NodePort or kubectl port-forward are the simplest options; for on-prem lab clusters, consider MetalLB if you need LoadBalancer behavior.
- Observe rollouts: kubectl rollout status deployment/nginx-deploy and kubectl rollout undo if something goes wrong.
- Keep security in mind: by exposing Services externally, you increase your attack surface—use network policies, minimal permissions, and Ingress controllers with TLS for production traffic.
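The network-policy tip above can be sketched as a minimal NetworkPolicy that allows only Pods labeled role: frontend to reach the nginx Pods (the frontend label is an assumption for illustration; enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico or Cilium):

```yaml
# Only Pods labeled role: frontend may reach the nginx Pods on TCP 80;
# all other ingress to those Pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
```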
Where to read next
- Official Kubernetes concepts: Pods, Deployments, Services (great for reference and deeper details). (kubernetes.io)
- MetalLB docs for bare-metal LoadBalancer behavior and installation. (metallb.io)
Final analogy — plumbing and concerts
If Pods are apartments and Deployments are property managers, Services are the plumbing and electricity: they make sure every request finds a working apartment. ClusterIP is the internal wiring, NodePort exposes a fixed outlet on each building, and LoadBalancer is like the city placing a bus stop at the front door. If your “city” is custom-built (bare metal), MetalLB is the contractor you hire to actually build the bus stop.
Play around with the example manifest above — it’s the easiest way to learn. Good next steps: deploy the YAML step by step and watch the rollout, try a Service + Ingress setup with TLS, or work through setting up MetalLB on a small cluster.