Workspaces & Environments

Manage dev, staging, and production environments with Terraform

8 min read

In the previous tutorial, we learned how to manage secrets safely. Now let's solve another real-world problem: you need dev, staging, and production. Same infrastructure, different sizes, separate state.

There are multiple ways to handle this, and honestly, the Terraform community argues about the "right" approach like it's politics. Let's explore them all so you can decide.

Terraform Workspaces

Workspaces are Terraform's built-in way to manage multiple environments with the same configuration. Simple to set up — but they come with some sharp edges.

Basic Commands

# List workspaces
terraform workspace list
# * default

# Create workspace
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

# Switch workspace
terraform workspace select dev

# Show current
terraform workspace show
# dev

# Delete workspace
terraform workspace delete dev

Using Workspace in Config

# Access current workspace name
locals {
  environment = terraform.workspace
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"

  tags = {
    Name        = "web-${terraform.workspace}"
    Environment = terraform.workspace
  }
}

Workspace-Specific Variables

locals {
  env_config = {
    dev = {
      instance_type = "t2.micro"
      instance_count = 1
      db_instance_class = "db.t3.micro"
    }
    staging = {
      instance_type = "t2.small"
      instance_count = 2
      db_instance_class = "db.t3.small"
    }
    prod = {
      instance_type = "t2.large"
      instance_count = 3
      db_instance_class = "db.t3.large"
    }
  }

  # Note: this errors on the "default" workspace (no entry); select a real env first
  config = local.env_config[terraform.workspace]
}

resource "aws_instance" "web" {
  count         = local.config.instance_count
  instance_type = local.config.instance_type
  # ...
}

Workspace State Storage

With S3 backend, workspaces create separate state files:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "myapp/terraform.tfstate"
    region = "us-west-2"
  }
}

State files end up at:

  • env:/dev/myapp/terraform.tfstate
  • env:/staging/myapp/terraform.tfstate
  • env:/prod/myapp/terraform.tfstate
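
That `env:` prefix is the S3 backend's default; you can change it with `workspace_key_prefix`. A minimal sketch, reusing the bucket and key from the example above:

```hcl
terraform {
  backend "s3" {
    bucket               = "my-terraform-state"
    key                  = "myapp/terraform.tfstate"
    region               = "us-west-2"
    workspace_key_prefix = "environments" # states land at environments/<workspace>/myapp/terraform.tfstate
  }
}
```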

Workspace Limitations

"So workspaces are perfect?"

Not exactly. There are some real gotchas:

  1. Same configuration — All environments use identical code (can't add prod-only features easily)
  2. Easy mistakes — One wrong command affects production
  3. No visual separation — Just a name difference

# Scary scenario
terraform workspace select prod  # Oops, thought I was in dev
terraform destroy                # There goes production šŸ’„

Yeah. That happens. More often than anyone admits.
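
If you stay on workspaces, you can at least make Terraform refuse to plan in a workspace you didn't expect. This is a sketch, not a cure: it catches typo'd or forgotten `workspace select` commands, not a deliberate `select prod`. It uses a `lifecycle` precondition on a `terraform_data` resource (Terraform 1.4+):

```hcl
locals {
  allowed_workspaces = ["dev", "staging", "prod"]
}

# Fails the plan early if the current workspace isn't in the allowed list.
resource "terraform_data" "workspace_guard" {
  lifecycle {
    precondition {
      condition     = contains(local.allowed_workspaces, terraform.workspace)
      error_message = "Unexpected workspace '${terraform.workspace}'. Select dev, staging, or prod first."
    }
  }
}
```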

Directory-Per-Environment

"Is there a safer way?"

More isolation, more safety. Each environment gets its own directory:

terraform/
ā”œā”€ā”€ modules/
│   ā”œā”€ā”€ vpc/
│   ā”œā”€ā”€ compute/
│   └── database/
ā”œā”€ā”€ environments/
│   ā”œā”€ā”€ dev/
│   │   ā”œā”€ā”€ main.tf
│   │   ā”œā”€ā”€ variables.tf
│   │   ā”œā”€ā”€ terraform.tfvars
│   │   └── backend.tf
│   ā”œā”€ā”€ staging/
│   │   ā”œā”€ā”€ main.tf
│   │   ā”œā”€ā”€ variables.tf
│   │   ā”œā”€ā”€ terraform.tfvars
│   │   └── backend.tf
│   └── prod/
│       ā”œā”€ā”€ main.tf
│       ā”œā”€ā”€ variables.tf
│       ā”œā”€ā”€ terraform.tfvars
│       └── backend.tf

Environment Configuration

# environments/dev/main.tf
module "infrastructure" {
  source = "../../modules/app"

  environment    = "dev"
  instance_type  = "t2.micro"
  instance_count = 1
}
# environments/prod/main.tf
module "infrastructure" {
  source = "../../modules/app"

  environment    = "prod"
  instance_type  = "t2.large"
  instance_count = 3
  
  # Prod-only features
  enable_monitoring = true
  enable_backups    = true
  multi_az          = true
}

Separate Backends

# environments/dev/backend.tf
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "dev/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
  }
}

# environments/prod/backend.tf
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    key            = "prod/terraform.tfstate"  # Different key
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
  }
}

Working with Directories

# Dev environment
cd environments/dev
terraform init
terraform plan
terraform apply

# Prod environment
cd ../prod
terraform init
terraform plan
terraform apply

You physically can't accidentally destroy prod while in the dev directory. That's the beauty of it.

tfvars Per Environment

Single directory, multiple tfvars files:

terraform/
ā”œā”€ā”€ main.tf
ā”œā”€ā”€ variables.tf
ā”œā”€ā”€ outputs.tf
ā”œā”€ā”€ dev.tfvars
ā”œā”€ā”€ staging.tfvars
└── prod.tfvars
# dev.tfvars
environment    = "dev"
instance_type  = "t2.micro"
instance_count = 1
enable_cdn     = false

# prod.tfvars
environment    = "prod"
instance_type  = "t2.large"
instance_count = 3
enable_cdn     = true
# Apply with specific tfvars
terraform apply -var-file="dev.tfvars"
terraform apply -var-file="prod.tfvars"

Problem: Same State

All environments share one state file, which is terrible for isolation. Like giving everyone in the office the same locker: apply prod.tfvars right after dev.tfvars and Terraform doesn't create a second set of resources; it plans to resize dev's instances into prod's, because both runs point at the same resource addresses in the same state.

Solution: Backend Partial Config

# backend.tf
terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-state"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
    # key is set during init
  }
}
# Dev
terraform init -reconfigure -backend-config="key=dev/terraform.tfstate"
terraform apply -var-file="dev.tfvars"

# Prod
terraform init -reconfigure -backend-config="key=prod/terraform.tfstate"
terraform apply -var-file="prod.tfvars"

Terragrunt

"All these approaches have trade-offs. Is there something better?"

Terragrunt is a wrapper around Terraform that solves the DRY problem for environments. More setup, but incredibly powerful.

Directory Structure

infrastructure/
ā”œā”€ā”€ terragrunt.hcl            # Root config
ā”œā”€ā”€ modules/
│   └── app/
└── environments/
    ā”œā”€ā”€ dev/
    │   └── terragrunt.hcl
    ā”œā”€ā”€ staging/
    │   └── terragrunt.hcl
    └── prod/
        └── terragrunt.hcl

Root Config

# terragrunt.hcl
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket         = "mycompany-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

Environment Config

# environments/dev/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules/app"
}

inputs = {
  environment    = "dev"
  instance_type  = "t2.micro"
  instance_count = 1
}
# environments/prod/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../modules/app"
}

inputs = {
  environment    = "prod"
  instance_type  = "t2.large"
  instance_count = 3
  enable_backups = true
}

Using Terragrunt

cd environments/dev
terragrunt apply

cd ../prod
terragrunt apply

# Apply all environments
cd environments
terragrunt run-all apply

Multi-Account Strategy

Production in a separate AWS account:

terraform/
ā”œā”€ā”€ modules/
└── accounts/
    ā”œā”€ā”€ development/      # AWS Account: 111111111111
    │   └── main.tf
    ā”œā”€ā”€ staging/          # AWS Account: 222222222222
    │   └── main.tf
    └── production/       # AWS Account: 333333333333
        └── main.tf

Assume Role

# accounts/production/providers.tf
provider "aws" {
  region = "us-west-2"

  assume_role {
    role_arn     = "arn:aws:iam::333333333333:role/TerraformRole"
    session_name = "terraform-prod"
  }
}

Cross-Account State

# Production account reads dev VPC for peering
data "terraform_remote_state" "dev_vpc" {
  backend = "s3"

  config = {
    bucket = "dev-terraform-state"  # In dev account
    key    = "vpc/terraform.tfstate"
    region = "us-west-2"
    
    role_arn = "arn:aws:iam::111111111111:role/TerraformStateReader"
  }
}

resource "aws_vpc_peering_connection" "dev_to_prod" {
  vpc_id        = aws_vpc.prod.id
  peer_vpc_id   = data.terraform_remote_state.dev_vpc.outputs.vpc_id
  peer_owner_id = "111111111111"
}

Comparing Approaches

Here's the cheat sheet:

| Approach | Isolation | Flexibility | Complexity |
|---|---|---|---|
| Workspaces | Low | Low | Low |
| Directory per env | High | Medium | Medium |
| tfvars + partial backend | Medium | Medium | Medium |
| Terragrunt | High | High | High |
| Multi-account | Highest | Highest | Highest |

Recommendations

Pick what fits your team:

  • Small projects: tfvars per environment (keep it simple)
  • Medium projects: Directory per environment (safer isolation)
  • Large projects: Terragrunt or Terraform Cloud (DRY + automation)
  • Enterprise: Multi-account + Terragrunt (maximum isolation)

Environment Promotion

GitOps Workflow

feature-branch → dev → staging → prod
  1. Develop in feature branch
  2. Merge to main → auto-deploy to dev
  3. Tag v1.2.3 → deploy to staging
  4. Approve in staging → deploy to prod
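
The steps above can be sketched as a tiny CI helper. Everything here (the ref patterns, the `GITHUB_REF` variable, the `ref_to_env` name) is an assumption for illustration, not a real pipeline definition:

```shell
#!/bin/bash
# Hypothetical promotion helper: decide which environment a git ref deploys to.
ref_to_env() {
  case "$1" in
    refs/heads/main) echo "dev" ;;     # merge to main: auto-deploy dev
    refs/tags/v*)    echo "staging" ;; # version tag: deploy staging
    *)               echo "none" ;;    # prod goes through manual approval, not a ref
  esac
}

# A CI job would then run Terraform in the matching directory, e.g.:
#   ENV=$(ref_to_env "$GITHUB_REF")
#   cd "environments/$ENV" && terraform init && terraform apply -auto-approve
ref_to_env "refs/tags/v1.2.3"  # prints "staging"
```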

Terraform Cloud/Enterprise

# workspace: myapp-dev
# workspace: myapp-staging  (requires approval)
# workspace: myapp-prod     (requires approval)

Visual approval workflow with run history.
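
For reference, pointing a configuration at a Terraform Cloud workspace looks roughly like this (the organization and workspace names are placeholders; the `cloud` block needs Terraform 1.1+):

```hcl
terraform {
  cloud {
    organization = "mycompany" # placeholder org name

    workspaces {
      name = "myapp-dev" # one configuration directory per TFC workspace
    }
  }
}
```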

Practical Example

Complete multi-environment setup:

terraform/
ā”œā”€ā”€ modules/
│   └── web-app/
│       ā”œā”€ā”€ main.tf
│       ā”œā”€ā”€ variables.tf
│       └── outputs.tf
ā”œā”€ā”€ environments/
│   ā”œā”€ā”€ shared/
│   │   └── common_tags.tf  # Symlinked to each env
│   ā”œā”€ā”€ dev/
│   │   ā”œā”€ā”€ main.tf
│   │   ā”œā”€ā”€ providers.tf
│   │   ā”œā”€ā”€ backend.tf
│   │   └── terraform.tfvars
│   └── prod/
│       ā”œā”€ā”€ main.tf
│       ā”œā”€ā”€ providers.tf
│       ā”œā”€ā”€ backend.tf
│       └── terraform.tfvars
└── scripts/
    └── deploy.sh

Module

# modules/web-app/main.tf
variable "environment" {}
variable "instance_type" {}
variable "instance_count" {}
variable "enable_https" { default = false }
variable "domain_name" { default = "" }

resource "aws_instance" "web" {
  count         = var.instance_count
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.instance_type

  tags = {
    Name        = "${var.environment}-web-${count.index + 1}"
    Environment = var.environment
  }
}

resource "aws_lb" "web" {
  name               = "${var.environment}-web-lb"
  load_balancer_type = "application"
  # ...
}

resource "aws_acm_certificate" "web" {
  count             = var.enable_https ? 1 : 0
  domain_name       = var.domain_name
  validation_method = "DNS"
}

Dev Environment

# environments/dev/main.tf
module "web_app" {
  source = "../../modules/web-app"

  environment    = "dev"
  instance_type  = "t2.micro"
  instance_count = 1
  enable_https   = false
}

Prod Environment

# environments/prod/main.tf
module "web_app" {
  source = "../../modules/web-app"

  environment    = "prod"
  instance_type  = "t2.large"
  instance_count = 3
  enable_https   = true
  domain_name    = "app.example.com"
}

Deploy Script

#!/bin/bash
# scripts/deploy.sh
set -euo pipefail

ENV=${1:-dev}

cd "$(dirname "$0")/../environments/$ENV"

terraform init
terraform plan -out=tfplan
terraform apply tfplan

# Run it from the repo root:
./scripts/deploy.sh dev
./scripts/deploy.sh prod

What's Next?

You now have a solid grasp of environment management. You learned:

  • Terraform workspaces (simple but risky)
  • Directory-per-environment (safe and isolated)
  • tfvars with partial backends (middle ground)
  • Terragrunt for DRY environments
  • Multi-account architecture (enterprise-grade)

Next up: what do you do when you have existing infrastructure that Terraform doesn't know about? Time to learn about importing resources. Let's go!