Network Policies
Control pod-to-pod traffic with Kubernetes Network Policies for enhanced security.
In the previous tutorial, we set up Ingress Controllers to get traffic into our cluster. But here's a scary thought: by default, all pods can communicate with all other pods. Every single one. Your frontend can talk to your database directly. Your test pods can reach production. It's like an office building with no doors — anyone can walk anywhere.
Network Policies are your firewall rules. They let you control which pods can talk to each other, and that's essential for security.
How Network Policies Work
Network Policies select pods using labels (remember those?) and specify what traffic is allowed — both incoming (ingress) and outgoing (egress).
Pod A ──→ Network Policy ──→ Pod B
              │
              └── Allow/Deny based on rules
Think of it like a bouncer with a clipboard. If you're not on the list, you're not getting in.
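Every policy you'll see in this tutorial follows the same anatomy: a pod selector choosing which pods the policy protects, and rule lists describing what traffic is allowed. A minimal skeleton (names here are illustrative, not from a real cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy       # hypothetical name
  namespace: my-namespace    # a policy only affects pods in its own namespace
spec:
  podSelector:               # WHICH pods this policy applies to (by label)
    matchLabels:
      app: my-app
  policyTypes:               # which directions this policy governs
  - Ingress
  - Egress
  ingress: []                # allowed incoming traffic (empty list = deny all ingress)
  egress: []                 # allowed outgoing traffic (empty list = deny all egress)
```

Everything that follows is just variations on filling in those ingress and egress lists.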
Important caveat: Network Policies require a CNI plugin that supports them:
- Calico ✓
- Cilium ✓
- Weave Net ✓
- Flannel ✗ (doesn't support policies — it's the friendly neighbor who lets everyone in)
If your CNI doesn't support policies, you can create them all day long and nothing will happen. It's like putting up a "No Entry" sign that nobody reads.
Enable Network Policies in Minikube
Start Minikube with Calico:
minikube start --cni=calico
Or enable it on an existing cluster:
minikube start --network-plugin=cni
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify Calico is running:
kubectl get pods -n kube-system -l k8s-app=calico-node
Set Up Test Environment
Let's create a little playground with different pods representing a typical architecture:
# test-setup.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: netpol-demo
---
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: netpol-demo
  labels:
    app: web
    tier: frontend
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: api
  namespace: netpol-demo
  labels:
    app: api
    tier: backend
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: db
  namespace: netpol-demo
  labels:
    app: db
    tier: database
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: netpol-demo
  labels:
    app: client
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
Apply:
kubectl apply -f test-setup.yaml
Test connectivity (everything should work right now — it's a free-for-all):
# Get pod IPs
kubectl get pods -n netpol-demo -o wide
# Test from client to web
kubectl exec -n netpol-demo client -- wget -qO- --timeout=2 http://<web-ip>
Works? Good. Now let's lock it down.
Default Deny All
The first rule of Network Policies: start with zero trust. Deny everything, then allow only what's needed. Like a club that starts with the rope up and only lets specific people in.
Deny All Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: netpol-demo
spec:
  podSelector: {}  # Selects ALL pods in namespace
  policyTypes:
  - Ingress
Empty podSelector means "all pods in this namespace". No ingress rules means "deny all incoming traffic". Short, sweet, and terrifying.
Apply it:
kubectl apply -f default-deny-ingress.yaml
Test — this should now timeout (that's what we want!):
kubectl exec -n netpol-demo client -- wget -qO- --timeout=2 http://<web-ip>
# wget: download timed out
Boom. Locked down. Nobody can reach the web pod anymore. Now let's selectively open the doors.
Deny All Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
Deny All (Both Directions)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
Allow Specific Traffic
After denying everything, let's selectively allow traffic. This is where it gets fun.
Allow Ingress from Specific Pods
Let's say only the client pod should be able to reach the web pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-web
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - protocol: TCP
      port: 80
This policy says:
- "Hey web pods, you can receive traffic..."
- "...but ONLY from pods labeled app: client..."
- "...and ONLY on TCP port 80."
Everyone else? Talk to the hand.
Apply:
kubectl apply -f allow-client-to-web.yaml
Test:
# This works now (client is allowed)
kubectl exec -n netpol-demo client -- wget -qO- --timeout=2 http://<web-ip>
# This still fails (api pod doesn't have app: client label — not on the list!)
kubectl exec -n netpol-demo api -- wget -qO- --timeout=2 http://<web-ip>
How cool is that? Precise traffic control with just a few lines of YAML.
Allow Egress to Specific Destinations
Let's allow the api pods to reach the db pods (because APIs need databases):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-to-db
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 80
Namespace-Based Rules
"What if I want to allow traffic from pods in a different namespace?"
Great question! You can use namespace selectors for that:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
First, label the namespace:
kubectl label namespace monitoring name=monitoring
(On Kubernetes 1.21+, every namespace also carries a built-in kubernetes.io/metadata.name label, so you can match that instead of adding your own.)
Combine Pod and Namespace Selectors
Allow from specific pods in specific namespaces — for when you need precision:
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: production
    podSelector:
      matchLabels:
        role: frontend
This is the trickiest part of Network Policies, so pay attention: when namespaceSelector and podSelector are in the same list item (same -), it's AND logic. Separate list items are OR logic.
# AND: pods matching BOTH conditions (must be in prod namespace AND labeled web)
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
    podSelector:
      matchLabels:
        app: web

# OR: pods matching EITHER condition (anything from prod namespace, OR any web pod)
- from:
  - namespaceSelector:
      matchLabels:
        env: prod
  - podSelector:
      matchLabels:
        app: web
See the difference? One dash vs two dashes. Subtle, but it changes everything. Get this wrong (easy to do after staring at YAML for too long) and you'll accidentally open your database to the world. No pressure.
IP Block Rules
Allow or block specific IP ranges:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.0.1.0/24
    ports:
    - protocol: TCP
      port: 443
Allows egress to 10.0.0.0/8 except 10.0.1.0/24.
Allow DNS
"I denied all egress and now nothing works, not even DNS!"
Yeah, that's the gotcha everyone hits. When you deny egress, pods can't resolve DNS names. Which means they can't reach my-service.default.svc.cluster.local, which means... everything breaks.
Always allow DNS when you have egress deny rules:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
This is probably the most important policy to remember. Without it, your pods will be able to talk to IP addresses directly but can't resolve any hostnames. And since everything in Kubernetes uses DNS names... yeah.
Real-World Example: Three-Tier App
Alright, let's build something real. A classic web → api → db architecture with proper network policies. This is what production looks like:
# 1. Default deny all in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# 2. Allow DNS for all pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
---
# 3. Web: allow ingress from anywhere, egress to api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:
    - port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 8080
---
# 4. API: allow ingress from web, egress to db
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - port: 5432
---
# 5. DB: allow ingress only from api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - port: 5432
Traffic flow:
Internet → Web (port 80) → API (port 8080) → DB (port 5432)
              ↓                 ↓                  ↓
           Frontend          Backend           Database
Each tier can only talk to its immediate neighbor. The frontend can't skip the API and talk directly to the database. The database can't reach the internet. It's beautiful, organized, and secure.
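Before calling it done, it's worth probing each hop to confirm the allowed paths work and the blocked ones time out. A sketch of such a smoke test, with <web-pod>, <db-pod>, <api-ip>, and <db-ip> as placeholders you'd fill in from kubectl get pods -o wide (and assuming the images ship busybox-style wget and nc):

```shell
# Allowed: web → api on 8080 (should succeed)
kubectl exec -n myapp <web-pod> -- wget -qO- --timeout=2 http://<api-ip>:8080

# Blocked: web → db on 5432 (should time out; the frontend may not skip the API)
kubectl exec -n myapp <web-pod> -- nc -zv -w 2 <db-ip> 5432

# Blocked: db → internet (should fail; the db policy grants no egress)
kubectl exec -n myapp <db-pod> -- wget -qO- --timeout=2 http://example.com
```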
Viewing Network Policies
Let's see what policies are active:
kubectl get networkpolicies -n netpol-demo
Describe a policy:
kubectl describe networkpolicy allow-client-to-web -n netpol-demo
Testing Network Policies
Always test your policies! Don't just deploy and pray.
Quick Connectivity Test
# From one pod to another
kubectl exec -n netpol-demo client -- wget -qO- --timeout=2 http://<target-ip>
# Check if port is reachable
kubectl exec -n netpol-demo client -- nc -zv <target-ip> 80
Debug with a Test Pod
For serious debugging, spin up a pod with networking tools pre-installed:
kubectl run debug --rm -it --image=nicolaka/netshoot -n netpol-demo -- /bin/bash
# Inside the pod
curl http://<target-ip>
nslookup kubernetes
Troubleshooting
When your policies aren't working the way you expect (and they won't, at least the first time):
Policy Not Working
- Check that your CNI supports policies:
  kubectl get pods -n kube-system | grep -E 'calico|cilium|weave'
- Verify the policy is applied:
  kubectl get networkpolicies -n <namespace>
- Check that label selectors match:
  kubectl get pods -n <namespace> --show-labels
- Policy is additive: Multiple policies are OR'd together. If any policy allows traffic, it's allowed. You can't create a "deny" policy that overrides an "allow" policy. This trips people up constantly.
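Additivity is easier to see with a concrete pair. With both of these hypothetical policies in place, pods labeled app: web accept port-80 traffic from client pods AND from the monitoring namespace; the effective rule set is the union of everything any policy allows:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-clients          # policy 1: client pods may connect
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring       # policy 2: the monitoring namespace may too
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - port: 80
```

Deleting either policy removes only that slice of access; there is no way for one policy to subtract what another grants.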
Common Mistakes
- Forgetting DNS egress rules (the #1 mistake, ask me how I know)
- Wrong namespace for policy (policy must be in the same namespace as the pods it targets)
- Label selector typos (apps: web instead of app: web... good luck finding that)
- Mixing up AND/OR logic (the one-dash vs two-dash thing we covered earlier)
Clean Up
kubectl delete namespace netpol-demo
What's Next?
Nice work! You now know how to lock down pod-to-pod traffic like a security pro. Default deny, selective allow, namespace isolation — your cluster just got a whole lot more secure.
But Network Policies are just the beginning of traffic management. What if you need more advanced features like traffic splitting, mutual TLS, retries, and circuit breaking? That's where Service Mesh comes in. Let's go!