Services and Networking Basics
Expose your applications using Kubernetes Services. Understand ClusterIP, NodePort, and LoadBalancer service types.
In the previous tutorial, we learned how Deployments keep our Pods alive. But there's a problem: how do you actually talk to those Pods?
Pods come and go. They get new IP addresses each time they restart, like someone who keeps changing their phone number. You can't rely on Pod IPs. Services solve this problem — they provide a stable endpoint that routes traffic to the right Pods, no matter how many times those Pods restart or scale.
Think of a Service as a permanent phone number that always reaches the right person, even if they change offices every day.
The Problem
Okay, but why can't I just use the Pod's IP address?
Let me show you why. Imagine you have 3 nginx Pods:
nginx-abc123 → 10.244.0.5
nginx-def456 → 10.244.0.6
nginx-ghi789 → 10.244.0.7
Tomorrow, one crashes and gets replaced:
nginx-abc123 → 10.244.0.5
nginx-def456 → 10.244.0.6
nginx-xyz999 → 10.244.0.12 ← New Pod, new IP
Any client hardcoded to 10.244.0.7 is now broken. And how do you load balance across all three? You'd have to maintain a list of IPs and update it every time... sounds like a nightmare, right?
Services fix both problems. Let's see how.
What is a Service?
A Service is an abstraction that gives you three superpowers:
- Provides a stable IP address and DNS name
- Load balances traffic across matching Pods
- Discovers Pods using label selectors
        ┌─────────────┐
        │   Service   │
        │ 10.96.0.100 │
        └──────┬──────┘
               │
   ┌───────────┼───────────┐
   ▼           ▼           ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Pod 1  │ │ Pod 2  │ │ Pod 3  │
│10.244.x│ │10.244.y│ │10.244.z│
└────────┘ └────────┘ └────────┘
Clients talk to the Service IP. The Service routes to healthy Pods. Simple and elegant.
Setup: Create a Deployment
First, let's create something to expose. We need Pods before we can put a Service in front of them.
Create nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Apply it:
kubectl apply -f nginx-deployment.yaml
Verify Pods are running:
kubectl get pods -l app=nginx
Service Types
Kubernetes has four Service types. Think of them like different access levels at a building:
1. ClusterIP (Default) — "Employees Only"
Internal only. Accessible within the cluster. Not reachable from outside. Like an internal phone extension — works inside the office, but the outside world can't dial it.
Create nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80        # Port the Service listens on
      targetPort: 80  # Port the Pods listen on
Apply it:
kubectl apply -f nginx-service.yaml
Check the Service:
kubectl get services
Output:
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   1d
nginx-service   ClusterIP   10.96.45.123   <none>        80/TCP    10s
The Service has IP 10.96.45.123. This IP is stable — it won't change no matter how many times Pods restart. That's the whole point!
Test ClusterIP
But wait, if it's internal only, how do I test it?
From inside the cluster! We create a temporary throwaway Pod and make the request from there:
kubectl run test-pod --image=busybox --rm -it --restart=Never -- wget -qO- nginx-service
You'll see nginx's HTML response. The test-pod reached nginx-service, which routed to one of the nginx Pods. Magic!
DNS
Here's something really cool — Kubernetes automatically creates DNS entries for Services. You don't need to remember IP addresses!
- `nginx-service` (works in the same namespace)
- `nginx-service.default` (explicit namespace)
- `nginx-service.default.svc.cluster.local` (fully qualified)
kubectl run test-pod --image=busybox --rm -it --restart=Never -- nslookup nginx-service
Output:
Name: nginx-service
Address 1: 10.96.45.123 nginx-service.default.svc.cluster.local
2. NodePort — "Side Door Access"
Exposes the Service on a port on every node. External traffic can reach it via <NodeIP>:<NodePort>. It's not fancy, but it gets the job done for development and testing.
Create nginx-nodeport.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # Optional: pick a port in 30000-32767; omit to auto-assign
Apply it:
kubectl apply -f nginx-nodeport.yaml
Check:
kubectl get service nginx-nodeport
Output:
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-nodeport   NodePort   10.96.78.90   <none>        80:30080/TCP   5s
The Service is accessible on port 30080 of any node.
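Outside of Minikube, you can test this from any machine that can reach a node: look up a node's address, then curl the NodePort. The IP below is just an example; yours will differ:

```shell
# Find your nodes' addresses (see the INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide

# Hit the NodePort on any node (example IP; substitute your own)
curl http://192.168.49.2:30080
```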
Access with Minikube
Get the URL:
minikube service nginx-nodeport --url
Output:
http://192.168.49.2:30080
Open that in a browser, or:
curl $(minikube service nginx-nodeport --url)
You'll see the nginx welcome page. Boom — your app is accessible from outside the cluster!
3. LoadBalancer — "VIP Entrance"
This is the real deal for production. Creates an external load balancer (in cloud environments). The cloud provider provisions a public IP for you.
Create nginx-lb.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
Apply it:
kubectl apply -f nginx-lb.yaml
In AWS/GCP/Azure, this creates an actual cloud load balancer with a public IP. Pretty sweet.
But wait, I'm using Minikube. It doesn't have a real cloud load balancer!
You're right! In Minikube, the EXTERNAL-IP column stays stuck at <pending> unless you use a trick called minikube tunnel.
Minikube Tunnel
Run in a separate terminal:
minikube tunnel
Keep it running. Now check the Service:
kubectl get service nginx-lb
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-lb   LoadBalancer   10.96.12.34   127.0.0.1     80:31234/TCP   1m
Access at http://127.0.0.1:80. Nice!
4. ExternalName — "Just a Redirect"
Maps a Service to an external DNS name. Rarely used, mostly for migration scenarios when you're moving things into Kubernetes gradually.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.example.com
Pods calling external-db are pointed at database.example.com through a DNS CNAME record. Nothing is proxied; it's purely a DNS-level alias.
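You can see the alias at the DNS level using the same throwaway-Pod trick from earlier. Note that database.example.com is a placeholder from the manifest above, so the lookup will only fully resolve if that name really exists:

```shell
# The answer comes back as a CNAME pointing at database.example.com
kubectl run test-pod --image=busybox --rm -it --restart=Never -- nslookup external-db
```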
Service Discovery
How do Pods find Services? Two ways:
Environment Variables
Kubernetes automatically injects Service info into every Pod as environment variables; no configuration needed. The catch: only Services that already existed when the Pod started are included:
kubectl exec <pod-name> -- env | grep NGINX
Output:
NGINX_SERVICE_SERVICE_HOST=10.96.45.123
NGINX_SERVICE_SERVICE_PORT=80
DNS (Recommended)
DNS is the way to go. Use DNS names instead of environment variables — they're more flexible and work for Services created after the Pod.
# From any Pod in the same namespace
curl nginx-service:80
# From a different namespace
curl nginx-service.default:80
Much cleaner. This is how the pros do it.
Endpoints
How does a Service know which Pods to route to?
Behind the scenes, Services don't route to Pods directly — they route to Endpoints. Kubernetes automatically creates and updates Endpoints for each Service.
kubectl get endpoints nginx-service
Output:
NAME            ENDPOINTS                                     AGE
nginx-service   10.244.0.5:80,10.244.0.6:80,10.244.0.7:80     5m
These are the actual Pod IPs. When Pods scale or restart, Endpoints update automatically. You don't have to do anything — Kubernetes handles it all.
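You can watch the bookkeeping happen yourself. A quick sketch, assuming the 3-replica nginx Deployment from this tutorial:

```shell
# Scale down and the Endpoints list shrinks to two Pod IPs
kubectl scale deployment nginx --replicas=2
kubectl get endpoints nginx-service

# Scale back up and a third IP reappears
kubectl scale deployment nginx --replicas=3
kubectl get endpoints nginx-service
```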
Sessions and Sticky Sessions
By default, requests to a Service are spread across all matching Pods; with the standard kube-proxy setup, each new connection picks a backend more or less at random, so consecutive requests can land on different Pods. It's like calling customer support and getting a different agent every time.
But what if you need the same client to always hit the same Pod? That's called session affinity (or "sticky sessions"):
apiVersion: v1
kind: Service
metadata:
  name: nginx-sticky
spec:
  type: ClusterIP
  selector:
    app: nginx
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  ports:
    - port: 80
      targetPort: 80
Now requests from the same IP go to the same Pod for 1 hour. Handy for apps that store sessions in memory.
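Assuming you save the manifest above as nginx-sticky.yaml (the filename is my choice, not part of the tutorial), you can apply it and confirm the affinity setting took effect:

```shell
kubectl apply -f nginx-sticky.yaml

# Prints "ClientIP" when session affinity is enabled
kubectl get service nginx-sticky -o jsonpath='{.spec.sessionAffinity}'
```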
Headless Services
Sometimes you don't want load balancing at all. You want to discover all Pod IPs directly and do your own thing. That's what headless services are for — set clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
DNS returns all Pod IPs instead of a single Service IP:
kubectl run test --image=busybox --rm -it --restart=Never -- nslookup nginx-headless
Name: nginx-headless
Address 1: 10.244.0.5 ...
Address 2: 10.244.0.6 ...
Address 3: 10.244.0.7 ...
Used for stateful applications (databases) where clients need to connect to specific Pods. You'll see this in action when we learn about StatefulSets.
Quick Reference
| Type | Use Case | Accessible From |
|---|---|---|
| ClusterIP | Internal services | Within cluster only |
| NodePort | Development, simple external access | <NodeIP>:30000-32767 |
| LoadBalancer | Production external access | Public IP from cloud provider |
| ExternalName | External service alias | N/A (DNS redirect) |
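One more trick worth knowing: kubectl expose generates a Service for an existing Deployment without writing any YAML. Great for quick experiments, though manifest files are easier to review and version-control. A sketch using the nginx Deployment from this tutorial (the Service names here are my own):

```shell
# One-liner ClusterIP Service for the nginx Deployment
kubectl expose deployment nginx --name=nginx-quick --port=80 --target-port=80

# Or a NodePort variant
kubectl expose deployment nginx --name=nginx-quick-np --type=NodePort --port=80

# Remove the experiments when done
kubectl delete service nginx-quick nginx-quick-np
```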
Clean Up
kubectl delete deployment nginx
kubectl delete service nginx-service nginx-nodeport nginx-lb 2>/dev/null
What's Next?
Awesome — you can deploy apps AND expose them to the world! But here's the thing: hardcoding configuration inside your container images is a terrible idea. Every time you change a setting, you'd have to rebuild the whole image. Yuck.
In the next tutorial, you'll learn about ConfigMaps — a way to inject configuration into Pods without rebuilding images. Because smart developers separate code from configuration. Let's go!