Init Containers and Sidecars
Use init containers for setup tasks and sidecars for auxiliary functionality.
In the previous tutorial, we deployed DaemonSets to run pods on every node. Now let's zoom back into the pod itself and learn some powerful patterns.
Here's the thing — pods aren't limited to just one container. You can have multiple containers in a single pod, and they can serve different purposes:
- Init containers run before the main container starts (like the opening act before the headliner)
- Sidecars run alongside the main container throughout its lifecycle (like a copilot)
Both patterns extend functionality without touching your application code. Your app stays clean, and the pod does the heavy lifting.
Init Containers
Init containers run sequentially before any app containers start, and each must complete successfully before the next one runs. If an init container fails, the kubelet restarts it until it succeeds (unless the pod's restartPolicy is Never, in which case the whole pod is marked failed).
```
┌─────────────────────────────────────────────────┐
│                       Pod                       │
├─────────┬─────────┬─────────────────────────────┤
│ Init 1  │ Init 2  │       Main Container        │
│ (done)  │ (done)  │          (running)          │
├─────────┴─────────┴─────────────────────────────┤
│              Sequential → Parallel              │
└─────────────────────────────────────────────────┘
```
Use Cases
- Wait for dependencies: "Don't start the app until the database is ready!"
- Setup: Clone a git repo, download files, run migrations
- Security: Fetch secrets from HashiCorp Vault
- Configuration: Generate config files from templates
Basic Init Container
The classic use case — wait for a database before starting the app:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for db; sleep 2; done']
  containers:
  - name: app
    image: myapp:v1
    ports:
    - containerPort: 8080
```
The wait-for-db init container loops until it can connect to PostgreSQL on port 5432. Only when it exits successfully (exit code 0) does the main app container start. No more race conditions where your app starts before the database is ready. We've all been there.
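The retry loop's logic is easy to sketch locally without a cluster. In this hedged stand-in, a counter plays the role of the real `nc -z postgres 5432` check (and the two-second sleep is dropped so it finishes instantly):

```shell
# Local sketch of the init container's retry loop. The counter stands in
# for 'nc -z postgres 5432', which exits 0 once the port accepts connections.
attempts=0
until [ "$attempts" -ge 3 ]; do
  echo "waiting for db"
  attempts=$((attempts + 1))
done
# Reaching this point means the check finally succeeded; the script exits 0,
# which is the signal for Kubernetes to start the main container.
echo "db ready"
```

The key property is the exit code: `until` keeps looping while the check fails, so the container can only ever terminate with success.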
Multiple Init Containers
You can chain multiple setup steps — they run in order:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: fetch-config
    image: busybox
    command: ['wget', '-O', '/config/app.conf', 'http://config-server/app.conf']
    volumeMounts:
    - name: config
      mountPath: /config
  - name: run-migrations
    image: myapp:v1
    command: ['./migrate.sh']
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: url
  - name: warm-cache
    image: curlimages/curl
    command: ['curl', '-X', 'POST', 'http://cache-service/warm']
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    emptyDir: {}
```
Order: fetch-config → run-migrations → warm-cache → app starts. Each step must succeed before the next one begins. It's like a checklist that absolutely must be completed.
Share Data Between Init and Main Containers
Use emptyDir volumes to pass data from init containers to the main container:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: git-sync
spec:
  initContainers:
  - name: clone-repo
    image: alpine/git
    command:
    - git
    - clone
    - --depth=1
    - https://github.com/myorg/myrepo.git
    - /app
    volumeMounts:
    - name: app-code
      mountPath: /app
  containers:
  - name: app
    image: python:3.11
    command: ['python', '/app/main.py']
    volumeMounts:
    - name: app-code
      mountPath: /app
  volumes:
  - name: app-code
    emptyDir: {}
```
Init container clones code into the shared volume; main container runs it. They're like relay runners — one hands off to the other.
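The handoff mechanics can be sketched outside Kubernetes too. Here a temporary directory stands in for the emptyDir volume; one step writes into it and the next consumes it, mirroring how clone-repo and app share /app (file names and contents are illustrative):

```shell
# Local sketch of the emptyDir handoff: one step writes into a shared
# directory, a later step reads from it, just like init and main containers.
workdir=$(mktemp -d)                            # stands in for the emptyDir volume
echo 'print("hello from the cloned repo")' > "$workdir/main.py"  # clone-repo's job
handoff=$(cat "$workdir/main.py")               # what the main container sees
echo "$handoff"
rm -rf "$workdir"
```

Because the init container finishes before the main container starts, the "reader" is guaranteed to see the fully written files.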
Init Container Resources
"Can init containers use different resources than the main container?"
Yep! Init containers can be resource-hungry for setup, while the main container stays lean:
```yaml
initContainers:
- name: setup
  image: setup-tool
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
containers:
- name: app
  image: myapp
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
```
The pod's effective resource request is the max of:
- Sum of all app container requests
- Max of any single init container request
So your init container can request 1Gi of memory, but since it finishes before the main container starts, you're not wasting resources. Smart.
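As a quick sanity check, here's the arithmetic for the manifest above with memory in Mi. This is just a sketch of the rule, not anything the scheduler exposes directly:

```shell
# Effective pod memory request = max(sum of app requests, largest init request).
init_max=1024   # the setup init container requests 1Gi
app_sum=256     # the lone app container requests 256Mi
if [ "$init_max" -gt "$app_sum" ]; then
  effective=$init_max
else
  effective=$app_sum
fi
echo "effective memory request: ${effective}Mi"
```

So this pod schedules onto a node with 1Gi of free memory, even though the long-running footprint is only 256Mi.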
Sidecar Containers
Sidecars are the opposite of init containers — they run alongside the main container for the pod's entire lifecycle. Think of them as helpful roommates.
Use Cases
- Logging: Ship logs to a central system (so your app doesn't have to)
- Proxying: Service mesh proxies like Envoy
- Sync: Keep local files synced with remote repos
- Monitoring: Export metrics without touching app code
- Security: Handle TLS termination
Basic Sidecar Pattern
Here's a log shipper sidecar — probably the most common pattern:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper
    image: fluent/fluent-bit
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
    - name: fluent-config
      mountPath: /fluent-bit/etc
  volumes:
  - name: logs
    emptyDir: {}
  - name: fluent-config
    configMap:
      name: fluent-bit-config
```
The app writes logs to /var/log/app. The sidecar reads and ships them. The app doesn't need to know or care about the logging infrastructure. Separation of concerns at its finest.
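The pod references a fluent-bit-config ConfigMap that isn't shown above. A minimal sketch might look like the following; the tail input and stdout output are assumptions for illustration, and a real setup would point the output at Elasticsearch, Loki, or similar:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [INPUT]
        Name   tail
        Path   /var/log/app/*.log

    [OUTPUT]
        Name   stdout
        Match  *
```

Mounted at /fluent-bit/etc, this tells the sidecar to tail every log file the app writes and forward each record onward.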
Proxy Sidecar
Route all traffic through a proxy — this is basically how service meshes work:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: myapp:v1
    env:
    - name: HTTP_PROXY
      value: "http://localhost:8080"
    ports:
    - containerPort: 3000
  - name: proxy
    image: envoyproxy/envoy:v1.28.0
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    configMap:
      name: envoy-config
```
App traffic goes through the Envoy sidecar for mTLS, retries, circuit breaking. Your app thinks it's talking to localhost:8080. It has no idea there's a fancy proxy handling everything.
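For completeness, here's a hedged sketch of what the envoy-config ConfigMap could contain: a static bootstrap that listens on 8080 and forwards everything to a hypothetical backend service. The service name and ports are assumptions, and real meshes inject far richer configuration than this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: egress
              route_config:
                virtual_hosts:
                - name: all
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: backend }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
      - name: backend
        type: STRICT_DNS
        connect_timeout: 1s
        load_assignment:
          cluster_name: backend
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: backend.default.svc.cluster.local, port_value: 80 }
```

Retries, mTLS, and circuit breaking would be layered onto this same structure, which is why service meshes generate it for you.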
Git Sync Sidecar
Keep files in sync automatically — great for static websites:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-content
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.1.0
    env:
    - name: GITSYNC_REPO
      value: "https://github.com/myorg/static-content.git"
    - name: GITSYNC_ROOT
      value: "/content"
    - name: GITSYNC_PERIOD
      value: "60s"
    volumeMounts:
    - name: content
      mountPath: /content
  volumes:
  - name: content
    emptyDir: {}
```
The git-sync sidecar pulls updates every 60 seconds. Push to GitHub, and your site updates automatically. No redeploys needed!
Native Sidecar Containers (Kubernetes 1.28+)
"Wait, if sidecars are defined as regular containers, how does Kubernetes know to shut them down last?"
Great question! That's the problem native sidecars solve. Kubernetes 1.28 introduced proper sidecar support with restartPolicy: Always on init containers:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-native-sidecar
spec:
  initContainers:
  - name: log-shipper
    image: fluent/fluent-bit
    restartPolicy: Always  # makes it a sidecar
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
```
Benefits of native sidecars:
- Start before main containers (like init containers) — your logging sidecar is ready when the app starts
- Keep running alongside main containers
- Proper shutdown ordering (sidecars stop last)
- Job support (sidecars don't block Job completion — this was a huge pain before)
This is the future of sidecars in Kubernetes. The feature graduated to beta (and on by default) in 1.29, so if you're on a recent cluster, use native sidecars.
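The Job case is where this really shines. In a sketch like the one below (image and command are illustrative), an old-style sidecar would keep the Job from ever completing; declared as a native sidecar, it's shut down automatically once the main container exits:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: log-shipper
        image: fluent/fluent-bit
        restartPolicy: Always  # native sidecar: stops after the main container exits
      containers:
      - name: batch
        image: busybox
        command: ['sh', '-c', 'echo processing; sleep 5']
```

The Job completes when the batch container finishes; the sidecar no longer blocks it.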
Real-World Example: Web App with Multiple Patterns
Okay, let's put it all together. Here's a deployment that uses both init containers AND sidecars:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      initContainers:
      # 1. Wait for dependencies
      - name: wait-for-db
        image: busybox
        command: ['sh', '-c', 'until nc -z postgres 5432; do sleep 2; done']
      # 2. Run database migrations
      - name: migrate
        image: webapp:v1
        command: ['./bin/migrate']
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
      # 3. Download static assets
      - name: fetch-assets
        image: curlimages/curl
        command:
        - sh
        - -c
        - curl -o /assets/bundle.js https://cdn.example.com/bundle.js
        volumeMounts:
        - name: assets
          mountPath: /assets
      containers:
      # Main application
      - name: app
        image: webapp:v1
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: assets
          mountPath: /app/public/assets
        - name: logs
          mountPath: /var/log/app
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
      # Sidecar: Log shipping
      - name: fluentbit
        image: fluent/fluent-bit
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
        - name: fluent-config
          mountPath: /fluent-bit/etc
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
      # Sidecar: Metrics exporter
      - name: metrics
        image: prom/statsd-exporter
        ports:
        - containerPort: 9102
        resources:
          requests:
            memory: "32Mi"
            cpu: "25m"
      volumes:
      - name: assets
        emptyDir: {}
      - name: logs
        emptyDir: {}
      - name: fluent-config
        configMap:
          name: fluent-bit-config
```
This deployment does a lot:
- Waits for database to be ready (init)
- Runs migrations (init)
- Fetches static assets from CDN (init)
- Runs the main app
- Ships logs to central logging (sidecar)
- Exports metrics for Prometheus (sidecar)
Six different concerns, cleanly separated into individual containers. The app code itself just handles business logic. Everything else is infrastructure. Beautiful.
Container Lifecycle
Understanding the lifecycle is important for debugging.
Startup Order
- Init containers run sequentially
- All app containers start simultaneously
- readinessProbe determines when pod is ready
Shutdown Order
- Pod deletion is requested (kubectl delete, eviction, rollout)
- All containers receive SIGTERM simultaneously (can be messy)
- Containers have terminationGracePeriodSeconds (30s by default) to exit gracefully
- SIGKILL is sent to anything still running (the forceful eviction)
With native sidecars (1.28+), shutdown is much cleaner:
- Main containers stop first
- Sidecars stop last (so logging sidecars catch the final logs!)
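To make the most of that grace period, give containers a chance to drain before they exit. A minimal sketch follows; the 60-second budget and the preStop sleep are illustrative values, not requirements:

```yaml
spec:
  terminationGracePeriodSeconds: 60  # total budget before SIGKILL (default is 30)
  containers:
  - name: app
    image: myapp:v1
    lifecycle:
      preStop:
        exec:
          command: ['sh', '-c', 'sleep 5']  # let load balancers deregister before SIGTERM lands
```

The preStop hook runs before SIGTERM is delivered, which is a common trick to avoid dropping in-flight requests during rollouts.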
Probe Configuration
Don't forget to add probes to your sidecars too — they need health checks just like your main container:
```yaml
containers:
- name: proxy
  image: envoyproxy/envoy
  readinessProbe:
    httpGet:
      path: /ready
      port: 15021
    initialDelaySeconds: 1
  livenessProbe:
    httpGet:
      path: /ready
      port: 15021
    initialDelaySeconds: 10
```
Debugging Multi-Container Pods
When you have multiple containers, you need to specify which one you're talking to.
View All Containers
```shell
kubectl get pod myapp -o jsonpath='{.spec.containers[*].name}'
```
Logs from Specific Container
The -c flag is your friend:
```shell
kubectl logs myapp -c app
kubectl logs myapp -c log-shipper
```
Exec into Specific Container
```shell
kubectl exec -it myapp -c app -- /bin/sh
```
Init Container Logs
```shell
kubectl logs myapp -c wait-for-db
```
Check Init Container Status
```shell
kubectl describe pod myapp
```
Look for the "Init Containers" section showing the status of each. If your pod is stuck in Init:CrashLoopBackOff, one of your init containers is failing. Check its logs!
Common Patterns Summary
Here's a quick reference for when to use what:
| Pattern | Type | Use Case |
|---|---|---|
| Wait for dependency | Init | Database, service ready |
| Schema migration | Init | DB migrations before app |
| Config generation | Init | Render templates |
| Log shipping | Sidecar | Central logging |
| Proxy/mesh | Sidecar | mTLS, traffic control |
| Sync | Sidecar | Git sync, file sync |
| Metrics | Sidecar | Export app metrics |
Clean Up
```shell
kubectl delete pod myapp app-with-logging app-with-proxy
```
What's Next?
Nice work! You now know how to build sophisticated pods with init containers for setup tasks, sidecars for auxiliary functionality, and native sidecars for the best of both worlds.
There's one more important topic before we wrap up this series: Pod Disruption Budgets. When you're doing cluster maintenance and draining nodes, how do you make sure enough of your pods stay running? That's exactly what PDBs solve. Let's go!