Terraform Remote State: S3 Only vs S3 + DynamoDB Locking

· 7 min read ·
AWS · Terraform · DevOps · IaC

Disclaimer: This article reflects my own experience and setup. Your team size, pipeline structure, and risk tolerance will influence which approach fits best. Use this as a reference, not a prescription.

Terraform state is a single source of truth for your infrastructure. By default it lives in a local terraform.tfstate file, which works fine until two people run terraform apply at the same time and one of them overwrites the other’s changes with a stale state. Remote state on S3 solves the storage problem. DynamoDB locking solves the concurrency problem. They are separate concerns, and whether you need both depends on how your team works.


What Remote State Actually Does

When you configure an S3 backend, Terraform reads the state file from S3 before planning and writes it back after applying. Every team member and every CI/CD pipeline works from the same state, so there’s no drift between what one person’s local file says and what actually exists in AWS.

terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"
    key    = "project/terraform.tfstate"
    region = "ap-southeast-5"
  }
}

That’s the minimum. No DynamoDB, no locking - just shared state storage.
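Pointing an existing project at this backend is a one-time init. A sketch of the workflow, assuming the bucket already exists:

```shell
# Re-initialise the working directory against the S3 backend.
terraform init

# If the project previously used local state, this variant offers
# to copy the existing terraform.tfstate up to S3.
terraform init -migrate-state
```

After the migration, the local state file is no longer authoritative and can be removed from the working directory.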


S3 Only

What You Get

- Shared state: every engineer and every pipeline reads and writes the same file
- State history and rollback, provided S3 versioning is enabled on the bucket
- Minimal setup: one bucket and one backend block

What You Give Up

- No locking: two simultaneous applies race, and the last write wins with a stale state
- No visibility into whether anyone is currently applying, or who

When S3 Only Is Enough

S3 alone is a reasonable choice when:

- You are the only person who ever touches this state
- Every apply runs through a single, serialised CI pipeline job
- Concurrent applies are impossible by construction, not just unlikely

For solo projects, personal infrastructure, or tightly controlled pipelines, the DynamoDB table is overhead with no practical benefit.


S3 + DynamoDB Locking

DynamoDB locking adds a distributed lock on top of S3 state. When Terraform starts an operation that reads or writes state (plan, apply, destroy - anything not run with -lock=false), it writes a lock entry to the DynamoDB table. Any other Terraform process that tries to acquire the same lock while it’s held gets an error:

Error: Error acquiring the state lock
Lock Info:
  ID:        a1b2c3d4-...
  Path:      project/terraform.tfstate
  Operation: OperationTypeApply
  Who:       irfan@hostname
  Version:   1.9.0
  Created:   2025-10-13 08:22:11

The lock is released when the operation completes. If a process crashes mid-apply and leaves a stale lock, it can be force-unlocked with terraform force-unlock <lock-id>.
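Clearing a stale lock in practice looks something like this - a sketch, assuming the table name used in the configuration below, with the lock ID taken from the error message:

```shell
# Inspect current lock entries (the table is tiny, so a scan is fine).
aws dynamodb scan --table-name terraform-state-lock

# From the working directory of the affected configuration, release
# the stale lock using the ID printed in the error message.
terraform force-unlock <lock-id>
```

Only force-unlock after confirming the original process is actually dead - releasing a lock that is still legitimately held defeats the whole point.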

Configuration

terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"
    key            = "project/terraform.tfstate"
    region         = "ap-southeast-5"
    dynamodb_table = "terraform-state-lock"
  }
}

The DynamoDB table needs a single partition key named LockID of type String. Nothing else is required.

resource "aws_dynamodb_table" "tf_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

PAY_PER_REQUEST is the right billing mode here. The table gets one write per apply and one delete when the lock is released - the traffic is negligible and provisioned capacity would be wasteful.
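Under the hood, acquiring the lock is an atomic conditional write: the entry is created only if no item with that LockID already exists. A hand-rolled sketch of the same mechanism with the AWS CLI (the LockID value here is illustrative):

```shell
# First writer wins: the condition passes only if the item is absent.
aws dynamodb put-item \
  --table-name terraform-state-lock \
  --item '{"LockID": {"S": "my-tfstate-bucket/project/terraform.tfstate"}}' \
  --condition-expression 'attribute_not_exists(LockID)'

# A second put-item while the lock is held fails with a
# ConditionalCheckFailedException - which is how a concurrent
# Terraform run gets rejected.
```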

What You Get

- A guarantee that only one state-modifying operation runs at a time
- An explicit, immediate error on conflict instead of silent state corruption
- Visibility into who holds the lock, straight from the lock entry

What You Give Up

- One more resource to bootstrap, tag, and manage
- The occasional stale lock to force-unlock after a crashed run


The Bootstrapping Problem

Both the S3 bucket and the DynamoDB table must exist before you can run terraform init with this backend. You can’t use Terraform to create them in the same configuration that uses them as a backend - that’s a chicken-and-egg problem.

Common solutions are to create them once by hand with the AWS CLI or console, or to keep a tiny separate Terraform configuration with local state whose only job is to manage the state infrastructure. The manual route:

# Bootstrap the state infrastructure manually
aws s3api create-bucket \
  --bucket my-tfstate-bucket \
  --region ap-southeast-5 \
  --create-bucket-configuration LocationConstraint=ap-southeast-5

aws s3api put-bucket-versioning \
  --bucket my-tfstate-bucket \
  --versioning-configuration Status=Enabled

aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region ap-southeast-5

Side-by-Side Comparison

| Dimension | S3 Only | S3 + DynamoDB |
|---|---|---|
| Concurrent apply safety | None | Full lock with error on conflict |
| State history | Yes (with S3 versioning) | Yes (with S3 versioning) |
| Who-is-applying visibility | None | Yes, in lock entry |
| Setup complexity | One S3 bucket | S3 bucket + DynamoDB table |
| Cost | S3 storage only (~cents/month) | S3 + DynamoDB (still negligible) |
| Right for solo use | Yes | Yes, but unnecessary |
| Right for teams | Only if pipeline serialises applies | Yes |
| Bootstrapping required | S3 bucket only | S3 bucket + DynamoDB table |

Closing Thoughts

The decision comes down to one question: can two Terraform applies against this state run at the same time?

If the answer is no - solo project, serialised CI, single pipeline job - S3 alone is sufficient. Versioning handles recovery, and the DynamoDB table adds complexity without a practical benefit.

If the answer is yes or maybe - multiple engineers, parallel CI jobs, no pipeline-level serialisation - add DynamoDB. The cost is negligible, the setup is a one-time ten-minute job, and it prevents the kind of state corruption that is genuinely painful to recover from.
