Testing Helm Charts
Test your Helm charts with helm test, helm lint, and helm template. Catch errors before they hit production.
In the previous tutorial, we packaged and published charts. But how do you know your chart actually works? A chart that renders broken YAML, forgets a required label, or deploys a misconfigured Service is worse than no chart at all. Let's build a proper testing strategy.
The Testing Pyramid for Helm
         ╱╲
        ╱  ╲        Integration Tests
       ╱    ╲       (helm test, actually deploy)
      ╱──────╲
     ╱        ╲     Template Tests
    ╱          ╲    (render + assert on output)
   ╱────────────╲
  ╱              ╲  Static Analysis
 ╱                ╲ (helm lint, kubeval, kubeconform)
╱──────────────────╲
Start at the bottom (fast, cheap) and work up (slow, thorough).
Level 1: helm lint
The most basic check. It validates chart structure and catches common mistakes:
helm lint ./my-chart
# ==> Linting ./my-chart
# [INFO] Chart.yaml: icon is recommended
# [WARNING] templates/deployment.yaml: object name does not conform to
# Kubernetes naming requirements: "MY_APP"
# 1 chart(s) linted, 0 chart(s) failed
Lint catches:
- Missing required fields in Chart.yaml
- Malformed templates
- YAML syntax errors
- Some best practice violations
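To see the kind of structural check lint performs, here is a rough sketch using only standard tools — not a replacement for helm lint, just an illustration of the "required fields" rule. The heredoc stands in for a real Chart.yaml:

```shell
# Sketch of one check helm lint performs: Chart.yaml must declare
# apiVersion, name, and version. The heredoc stands in for a real file.
chart_yaml=$(cat <<'EOF'
apiVersion: v2
name: my-chart
version: 0.1.0
EOF
)
for field in apiVersion name version; do
  echo "$chart_yaml" | grep -q "^${field}:" \
    || { echo "missing required field: $field"; exit 1; }
done
echo "required fields present"
```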
# Lint with specific values
helm lint ./my-chart -f production.yaml
# Lint with strict mode (warnings become errors)
helm lint ./my-chart --strict
# Lint multiple charts
helm lint ./charts/*
"Should I lint in CI?"
Absolutely. It's fast and catches obvious mistakes. Make it the first check in your pipeline.
Level 2: helm template
Render your templates locally without connecting to a cluster:
# Render all templates
helm template my-release ./my-chart
# Render with production values
helm template my-release ./my-chart -f production.yaml
# Render only specific templates
helm template my-release ./my-chart -s templates/deployment.yaml
# Render with debug info
helm template my-release ./my-chart --debug
This is invaluable for:
- Verifying template output is valid YAML
- Checking that conditional blocks render correctly
- Ensuring values produce the expected manifests
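Even without a test framework, you can make cheap assertions against rendered output with grep. A minimal sketch — the heredoc stands in for the output of helm template my-release ./my-chart -f production.yaml:

```shell
# Cheap assertions on rendered output. In CI you would pipe
# `helm template ...` in; the heredoc stands in for that output.
rendered=$(cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0
EOF
)
echo "$rendered" | grep -q 'replicas: 3'         || { echo "wrong replica count"; exit 1; }
echo "$rendered" | grep -q 'image: my-app:2.0.0' || { echo "wrong image"; exit 1; }
echo "assertions passed"
```

This gets brittle fast — once you want more than a couple of checks, reach for a real framework (see Level 3 below).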
Validating Against Kubernetes Schemas
helm template just renders YAML — it doesn't check if the YAML is valid Kubernetes. Use kubeconform for that:
# Install kubeconform
brew install kubeconform
# Render and validate
helm template my-release ./my-chart | kubeconform -strict
# Validate against a specific K8s version
helm template my-release ./my-chart | kubeconform -kubernetes-version 1.28.0
# With custom values
helm template my-release ./my-chart -f production.yaml | kubeconform -strict
kubeconform validates every rendered resource against the official Kubernetes OpenAPI schemas. It catches things like:
- Invalid field names (typos like contianer instead of container)
- Wrong field types (replicas: "3" instead of replicas: 3)
- Missing required fields
- Deprecated API versions
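For the deprecated-API case specifically, a grep over the rendered manifests makes a cheap pre-check — not a kubeconform replacement, just a fast tripwire. The heredoc stands in for helm template output:

```shell
# Cheap tripwire for API versions removed from modern Kubernetes
# (extensions/v1beta1 Ingress was removed in 1.22, apps/v1beta1 and
# apps/v1beta2 in 1.16). The heredoc stands in for `helm template` output.
manifest=$(cat <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
EOF
)
if echo "$manifest" | grep -qE 'apiVersion: (extensions/v1beta1|apps/v1beta[12])'; then
  echo "deprecated apiVersion found"
fi
```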
Level 3: Template Unit Tests
For serious charts, use a testing framework that lets you write assertions against rendered output.
helm-unittest
helm-unittest is the most popular option:
# Install as a Helm plugin
helm plugin install https://github.com/helm-unittest/helm-unittest
Create tests in the chart's tests/ directory:
# tests/deployment_test.yaml
suite: deployment tests
templates:
  - templates/deployment.yaml
tests:
  - it: should set replicas from values
    set:
      replicaCount: 5
    asserts:
      - equal:
          path: spec.replicas
          value: 5

  - it: should use the correct image
    set:
      image.repository: my-app
      image.tag: "2.0.0"
    asserts:
      - equal:
          path: spec.template.spec.containers[0].image
          value: "my-app:2.0.0"

  - it: should not set replicas when autoscaling is enabled
    set:
      autoscaling.enabled: true
    asserts:
      - notExists:
          path: spec.replicas

  - it: should set resource limits
    set:
      resources.limits.cpu: 200m
      resources.limits.memory: 256Mi
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.cpu
          value: 200m
      - equal:
          path: spec.template.spec.containers[0].resources.limits.memory
          value: 256Mi
Run the tests:
helm unittest ./my-chart
# PASS deployment tests
# ✓ should set replicas from values
# ✓ should use the correct image
# ✓ should not set replicas when autoscaling is enabled
# ✓ should set resource limits
#
# Charts: 1 passed, 1 total
# Test Suites: 1 passed, 1 total
# Tests: 4 passed, 4 total
More Assertion Types
tests:
  # Check a value exists
  - it: should have labels
    asserts:
      - exists:
          path: metadata.labels

  # Check contains (for lists)
  - it: should have http port
    asserts:
      - contains:
          path: spec.template.spec.containers[0].ports
          content:
            name: http
            containerPort: 80

  # Check the resource renders at all
  - it: should render ingress when enabled
    template: templates/ingress.yaml
    set:
      ingress.enabled: true
    asserts:
      - hasDocuments:
          count: 1

  # Check the resource is NOT rendered
  - it: should not render ingress when disabled
    template: templates/ingress.yaml
    set:
      ingress.enabled: false
    asserts:
      - hasDocuments:
          count: 0

  # Regex matching
  - it: should have valid name
    asserts:
      - matchRegex:
          path: metadata.name
          pattern: ^[a-z][a-z0-9-]*$

  # Snapshot testing
  - it: should match snapshot
    asserts:
      - matchSnapshot: {}
Snapshot Testing
Snapshot tests save the rendered output and compare future renders against it:
# First run: creates snapshots
helm unittest ./my-chart
# Subsequent runs: compares against saved snapshots
helm unittest ./my-chart
# Update snapshots when changes are intentional
helm unittest ./my-chart --update-snapshot
Snapshots live in tests/__snapshot__/ and should be committed to version control.
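What a snapshot test does under the hood can be sketched with plain diff — save the rendered output once, then compare every later render against it. Here a render function stands in for helm template my-release ./my-chart:

```shell
# Snapshot testing by hand: helm-unittest automates exactly this loop.
workdir=$(mktemp -d)
render() {
  # Stand-in for `helm template my-release ./my-chart`
  printf 'replicas: 3\nimage: my-app:1.0.0\n'
}
# First run: save the snapshot
render > "$workdir/saved.yaml"
# Later run: compare a fresh render against the saved snapshot
render > "$workdir/current.yaml"
diff "$workdir/saved.yaml" "$workdir/current.yaml" && echo "snapshot matches"
```

The trade-off is the same as in any snapshot framework: intentional changes require updating the saved file, which is what --update-snapshot does for you.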
Level 4: helm test (Integration Tests)
helm test runs actual pods in your cluster to verify a release works. Test pods are defined as templates with the "helm.sh/hook": test annotation:
# templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "my-chart.fullname" . }}-test-connection"
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "my-chart.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
Run the tests:
# Install first
helm install my-release ./my-chart
# Then test
helm test my-release
# NAME: my-release
# LAST DEPLOYED: Wed Jan 17 09:15:00 2024
# STATUS: deployed
# TEST SUITE: my-release-test-connection
# Last Started: Wed Jan 17 09:16:00 2024
# Last Completed: Wed Jan 17 09:16:05 2024
# Phase: Succeeded
Real-World Integration Tests
# templates/tests/test-api-health.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "my-chart.fullname" . }}-test-api"
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: test
      image: curlimages/curl:latest
      command:
        - /bin/sh
        - -c
        - |
          echo "Testing health endpoint..."
          STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
            http://{{ include "my-chart.fullname" . }}:{{ .Values.service.port }}/health)
          if [ "$STATUS" = "200" ]; then
            echo "Health check passed!"
            exit 0
          else
            echo "Health check failed with status: $STATUS"
            exit 1
          fi
  restartPolicy: Never
# templates/tests/test-db-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "my-chart.fullname" . }}-test-db"
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  containers:
    - name: test
      image: postgres:16-alpine
      command:
        - /bin/sh
        - -c
        - |
          pg_isready -h {{ .Release.Name }}-postgresql -p 5432
          if [ $? -eq 0 ]; then
            echo "Database is reachable!"
            exit 0
          else
            echo "Cannot reach database"
            exit 1
          fi
  restartPolicy: Never
Putting It All Together — CI Pipeline
Here's what a comprehensive chart testing pipeline looks like:
#!/bin/bash
set -euo pipefail
CHART_DIR="./my-chart"
echo "=== Step 1: Lint ==="
helm lint "$CHART_DIR" --strict
echo "=== Step 2: Template Render ==="
helm template test-release "$CHART_DIR" > /dev/null
echo "=== Step 3: Schema Validation ==="
helm template test-release "$CHART_DIR" | kubeconform -strict -kubernetes-version 1.28.0
echo "=== Step 4: Unit Tests ==="
helm unittest "$CHART_DIR"
echo "=== Step 5: Install in Test Cluster ==="
helm upgrade --install test-release "$CHART_DIR" \
  --namespace test \
  --create-namespace \
  --wait \
  --timeout 5m
echo "=== Step 6: Integration Tests ==="
helm test test-release --namespace test --timeout 5m
echo "=== Step 7: Cleanup ==="
helm uninstall test-release --namespace test
echo "All tests passed!"
Chart Testing Tool (ct)
The chart-testing tool (ct) is purpose-built for testing charts in CI, especially when you have multiple charts in a repo:
# Install
brew install chart-testing
# Lint all charts that changed (git diff)
ct lint --target-branch main
# Install and test changed charts
ct install --target-branch main
# Both
ct lint-and-install --target-branch main
ct automatically:
- Detects which charts changed via git diff
- Lints only changed charts
- Creates temporary namespaces for testing
- Runs helm install and helm test
- Cleans up after itself
Configuration via ct.yaml:
# ct.yaml
chart-dirs:
  - charts
target-branch: main
helm-extra-args: --timeout 600s
validate-maintainers: false
Debugging Test Failures
# See test pod logs
helm test my-release --logs
# If a test pod is still around, check it manually
kubectl logs my-release-test-connection
# Check events
kubectl describe pod my-release-test-connection
# Run tests without cleanup for debugging
helm test my-release
# Then manually inspect: kubectl get pods, kubectl logs, etc.
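Whether a failed test pod is still around for inspection depends on its delete policy. With hook-succeeded, as in the examples above, Helm deletes the pod only when the test passes — so failing pods survive for kubectl logs and kubectl describe:

```yaml
# Delete-policy choices for test pods (all real Helm values):
annotations:
  "helm.sh/hook": test
  # hook-succeeded: delete only on success, so failing pods stay
  # around for debugging. Other options: hook-failed (delete on
  # failure) and before-hook-creation (delete the old pod just
  # before the next run — Helm's default behavior).
  "helm.sh/hook-delete-policy": hook-succeeded
```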
What's Next?
You now have a full testing toolkit — from fast static analysis to real integration tests running in a cluster.
In the next and final tutorial, we'll bring everything together with Helm in CI/CD pipelines — automating chart testing, releasing, and deployment with GitHub Actions.