🚀 Where Should You Store Terraform State Files for Maximum Efficiency?

💡 Introduction
Welcome to the world of Cloud and Infrastructure as Code (IaC)! If you're building infrastructure with Terraform — one of the most popular tools in the DevOps ecosystem — you've probably come across the mysterious terraform.tfstate file. This small yet crucial file is the backbone of how Terraform tracks infrastructure resources.
In this blog, we'll dive into:
What the terraform.tfstate file is
Why relying on a local state file can cause issues
And how to properly store Terraform state remotely using AWS S3 and DynamoDB
Whether you're a student exploring Terraform or a cloud enthusiast looking to follow best practices, this guide will help you understand how state management works and how to secure and scale your infrastructure properly.
So without further ado, let’s get started! 🚀
💡 Prerequisites
Before we dive into configuring remote state storage in Terraform, ensure you have the following setup ready:
✅ An AWS Account – You'll need access to an AWS account with an IAM user that has at least the AmazonEC2FullAccess, AmazonS3FullAccess, and AmazonDynamoDBFullAccess managed policies. You can grant broader permissions like AdministratorAccess for learning purposes, but it's recommended to follow the principle of least privilege in production.
✅ AWS CLI Installed and Configured – Make sure you’ve installed the AWS CLI and configured it with your IAM credentials using:
aws configure
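To quickly confirm the CLI is wired up to the right account, you can run this read-only check (it simply prints the account ID and IAM identity behind your configured credentials):
aws sts get-caller-identity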
✅ Terraform Installed – Download and install Terraform from the official website. Confirm installation by running:
terraform -v
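If the installation succeeded, the output will look something like this (your version and platform will differ):
Terraform v1.9.5
on linux_amd64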
Once all these are in place, you're ready to start working with Terraform state files!
💡 What is the terraform.tfstate File?
If you've worked with Terraform before and created some resources, you might have noticed that after running terraform apply, a file named terraform.tfstate is automatically generated. This file is the heart of your Terraform project.
But what exactly does it do?
The terraform.tfstate file maintains a mapping between the infrastructure code written in your .tf files and the actual resources deployed in your cloud account. It stores the last known state of those resources, including important metadata like resource IDs, attributes, dependencies, and more.
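To make this concrete, here is a heavily trimmed, purely illustrative excerpt of a state file tracking an EC2 instance (the IDs are made up, and a real file carries many more attributes):
{
  "version": 4,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abc123def4567890",
            "instance_type": "t2.micro"
          }
        }
      ]
    }
  ]
}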
Here's why this is so important:
When you run terraform plan, Terraform reads this file to compare the desired state (what's in your code) with the actual state (what's currently running in your cloud). It then shows you the differences and determines what actions (create, update, delete) are needed to bring the infrastructure in sync with your code.
When you run terraform apply, Terraform updates the actual infrastructure and then updates the terraform.tfstate file accordingly.
In short, this file is what allows Terraform to track, manage, and orchestrate changes to your infrastructure consistently and reliably.
However, there’s a catch — especially when it comes to local state.
💡 Problems with Local State File
While the terraform.tfstate file plays a critical role in managing your infrastructure, storing it locally comes with serious drawbacks — especially in team environments or production setups.
1. Sensitive Information Exposure
The state file often contains sensitive data like:
API keys
Passwords
Resource metadata and configuration values
If this file is stored locally and shared improperly (e.g., committed to Git), it can lead to security risks and potential breaches.
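At the very least, keep state files out of version control. A typical .gitignore for a Terraform project includes entries like these:
*.tfstate
*.tfstate.*
.terraform/
(The .terraform.lock.hcl file, by contrast, is usually committed, since it pins provider versions for the whole team.)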
2. No Single Source of Truth
Let's say two developers are working on the same Terraform project with local state files. If both of them:
Make changes
Apply them independently
And have different versions of the terraform.tfstate file
...this can lead to conflicting updates, resource mismatches, or worse — an endless loop of undoing each other's changes. The result? Chaos and an unreliable infrastructure state.
3. Lack of Collaboration and Control
Local state doesn’t support features like:
State Locking: Prevents multiple users from making concurrent changes
Versioning: Track history of infrastructure changes
Secure Sharing: Centralized access for teams
High Availability: State that survives the loss of a single local machine
💡 Why Remote State is Recommended
To address these issues, Terraform allows storing state remotely using backends like:
Amazon S3 (with DynamoDB for state locking)
Azure Blob Storage
Google Cloud Storage
HashiCorp Cloud Platform (HCP) Terraform
Remote backends provide:
Encrypted storage
Automatic versioning
Team-friendly collaboration
State locking to avoid simultaneous updates
For most real-world projects — especially when working in a team or managing production infrastructure — configuring a remote backend is not just a best practice, it’s a necessity.
Next, let’s walk through how to do this using AWS S3 and DynamoDB.
💡 Setting Up Remote Terraform Backend with AWS S3 and DynamoDB
Now that we understand the problems with local state, let’s see how to properly configure remote state storage using AWS S3 (for storing the state file) and DynamoDB (for state locking).
Instead of manually creating the required AWS resources, we’ll automate the setup using a simple bash script.
🔧 Step 1: Automate Backend Resource Creation
Create a new file named config.sh and paste the following content into it:
#!/bin/bash
set -e

# Environment Variables
AWS_REGION="us-east-1"
S3_BUCKET_NAME="pravesh-terraform-state-bucket-2025" # S3 bucket names are globally unique, so pick your own
DYNAMODB_TABLE_NAME="terraform-state-lock"
STATE_KEY="terraform/terraform.tfstate" # matches the key used in backend.tf below

echo "--- Creating AWS Resources for Terraform Backend ---"
echo ""

# 1. Create S3 Bucket
# Note: for any region other than us-east-1, create-bucket also requires
# --create-bucket-configuration LocationConstraint="$AWS_REGION"
echo "Creating S3 bucket: $S3_BUCKET_NAME in region $AWS_REGION..."
aws s3api create-bucket \
  --bucket "$S3_BUCKET_NAME" \
  --region "$AWS_REGION"

echo "Enabling versioning on S3 bucket..."
aws s3api put-bucket-versioning \
  --bucket "$S3_BUCKET_NAME" \
  --versioning-configuration Status=Enabled

echo "S3 bucket created and versioning enabled."
echo ""

# 2. Create DynamoDB Table for State Locking (Terraform stores its lock under the LockID key)
echo "Creating DynamoDB table: $DYNAMODB_TABLE_NAME..."
aws dynamodb create-table \
  --table-name "$DYNAMODB_TABLE_NAME" \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region "$AWS_REGION"

echo "DynamoDB table created for state locking."
echo ""
Make the script executable and run it:
chmod u+x config.sh
./config.sh
✅ This script will create:
An S3 bucket with versioning enabled to store your terraform.tfstate
A DynamoDB table for locking and preventing concurrent state operations
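If you'd like to double-check before moving on, these read-only commands should confirm both resources exist: the first should show versioning as Enabled, and the second should report the table status as ACTIVE once it's ready.
aws s3api get-bucket-versioning --bucket pravesh-terraform-state-bucket-2025
aws dynamodb describe-table --table-name terraform-state-lock --query "Table.TableStatus"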
🚀 Step 2: Create a Terraform Project to Provision an NGINX Server
Now let’s set up a basic Terraform project that provisions an EC2 instance running an NGINX server with a portfolio page.
Create a new directory called basic-terra and inside it, add the following files:
📄 provider.tf
provider "aws" {
region = "us-east-1"
}
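Optionally, you can also pin the AWS provider version so future upgrades don't surprise you. A minimal sketch (the version constraint here is just an example, adjust it to your needs):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}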
📄 main.tf
resource "aws_security_group" "sg" {
name = "Basic-Security Group"
description = "Allow port 80 for HTTP"
tags = {
Name = "Basic-sg"
}
}
resource "aws_vpc_security_group_egress_rule" "example" {
security_group_id = aws_security_group.sg.id
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = "-1"
}
resource "aws_vpc_security_group_ingress_rule" "example" {
security_group_id = aws_security_group.sg.id
cidr_ipv4 = "0.0.0.0/0"
from_port = 80
to_port = 80
ip_protocol = "tcp"
}
resource "aws_instance" "web" {
ami = "ami-020cba7c55df1f615" # Use a valid AMI ID for your region
instance_type = "t2.micro"
security_groups = [aws_security_group.sg.name]
user_data = file("userdata.sh")
tags = {
Name = "basic-terra"
}
}
output "instance_public_ip" {
value = aws_instance.web.public_ip
description = "Website is running on this address:"
}
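A quick note on the hard-coded AMI: IDs differ per region and eventually get deprecated. If you'd rather look one up dynamically, a common pattern is a data source like this sketch (it targets Ubuntu 22.04 images published by Canonical, whose AWS owner ID is 099720109477):
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
You would then set ami = data.aws_ami.ubuntu.id in the aws_instance resource.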
📄 userdata.sh
#!/bin/bash
# Install NGINX
apt update -y
apt install nginx -y

# Start NGINX service
systemctl enable nginx
systemctl start nginx

# Portfolio HTML
cat <<EOF > /var/www/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Pravesh Sudha | Portfolio</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      background-color: #f4f4f4;
      text-align: center;
      padding: 50px;
    }
    h1 { color: #333; }
    p  { font-size: 18px; color: #666; }
    a  { color: #007BFF; text-decoration: none; }
  </style>
</head>
<body>
  <h1>Hi, I'm Pravesh Sudha</h1>
  <p>DevOps · Cloud · Content Creator</p>
  <p>
    <a href="https://blog.praveshsudha.com" target="_blank">Blog</a> |
    <a href="https://x.com/praveshstwt" target="_blank">Twitter</a> |
    <a href="https://www.youtube.com/@pravesh-sudha" target="_blank">YouTube</a> |
    <a href="https://www.linkedin.com/in/pravesh-sudha/" target="_blank">LinkedIn</a>
  </p>
</body>
</html>
EOF
📄 backend.tf
terraform {
  backend "s3" {
    bucket         = "pravesh-terraform-state-bucket-2025"
    key            = "terraform/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
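One thing worth knowing: if you already have a local terraform.tfstate from earlier experiments, Terraform detects it during initialization and offers to copy it into the new backend. You can trigger that migration explicitly with:
terraform init -migrate-state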
✅ Step 3: Initialize, Plan & Apply
Now inside the basic-terra directory, run the following commands:
terraform init
terraform plan
terraform apply --auto-approve
Once the resources are created, Terraform will output the public IP of the EC2 instance. Open it in your browser and you’ll see your personal portfolio hosted via NGINX! 🎉
📦 Check Your State File
Now visit your S3 console and open the bucket pravesh-terraform-state-bucket-2025. You'll see your terraform.tfstate file stored securely with:
Encryption enabled
Versioning in place
State locking handled by DynamoDB
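You can verify the same thing from the terminal if you prefer. To list the state object in the bucket:
aws s3 ls s3://pravesh-terraform-state-bucket-2025/terraform/
And while a plan or apply is running, Terraform holds a lock item in the DynamoDB table, which you can peek at with aws dynamodb scan --table-name terraform-state-lock.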
This setup not only makes your state secure but also production-grade and team-ready!
🧹 Cleaning Up: Tearing the Resources Down
Before we wrap up, let’s clean up all the resources we created to avoid unnecessary AWS charges.
Step 1: Destroy Terraform-managed Resources
Navigate to your project directory basic-terra and run the following command:
terraform destroy --auto-approve
This will:
Terminate the EC2 instance
Delete the security group and its associated ingress/egress rules
Step 2: Delete the S3 Bucket and DynamoDB Table
The S3 bucket and DynamoDB table were created manually via the config.sh script, so we'll clean them up using another automation script.
Create a file called delete.sh and paste the following content:
#!/bin/bash
set -e

# Environment Variables
AWS_REGION="us-east-1"
S3_BUCKET_NAME="pravesh-terraform-state-bucket-2025"
DYNAMODB_TABLE_NAME="terraform-state-lock"

# Note: this script uses jq to parse the AWS CLI's JSON output,
# so make sure jq is installed before running it.

echo "--- Deleting AWS Resources for Terraform Backend ---"
echo ""

# Empty the S3 Bucket (a versioned bucket cannot be deleted until every
# object version and delete marker has been removed)
echo "Emptying S3 bucket: $S3_BUCKET_NAME..."
objects_to_delete=$(aws s3api list-object-versions \
  --bucket "$S3_BUCKET_NAME" \
  --output=json \
  --query='{Objects: Versions[].[Key,VersionId], DeleteMarkers: DeleteMarkers[].[Key,VersionId]}' \
  --region "$AWS_REGION")

if [ "$(echo "$objects_to_delete" | jq '.Objects | length')" -gt 0 ] || \
   [ "$(echo "$objects_to_delete" | jq '.DeleteMarkers | length')" -gt 0 ]; then
  delete_payload=$(echo "$objects_to_delete" | jq -c '{Objects: (.Objects + .DeleteMarkers | map({Key: .[0], VersionId: .[1]}) | unique)}')
  aws s3api delete-objects \
    --bucket "$S3_BUCKET_NAME" \
    --delete "$delete_payload" \
    --region "$AWS_REGION"
  echo "S3 bucket emptied."
else
  echo "S3 bucket is already empty."
fi

# Delete the S3 Bucket
echo "Deleting S3 bucket..."
aws s3 rb s3://"$S3_BUCKET_NAME" --region "$AWS_REGION" --force
echo "S3 bucket deleted."

# Delete the DynamoDB Table
echo "Deleting DynamoDB table..."
aws dynamodb delete-table \
  --table-name "$DYNAMODB_TABLE_NAME" \
  --region "$AWS_REGION"
echo "DynamoDB table deleted."

echo "✅ Terraform backend resources deleted successfully."
Make the script executable and run it:
chmod u+x delete.sh
./delete.sh
This script will:
Empty the S3 bucket including all versions and delete markers
Delete the bucket itself
Remove the DynamoDB table
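To confirm everything is really gone, both of these checks should now fail with a not-found error:
aws s3api head-bucket --bucket pravesh-terraform-state-bucket-2025
aws dynamodb describe-table --table-name terraform-state-lock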
🏁 Conclusion
Managing Terraform state properly is not just a best practice — it's essential for building reliable, secure, and scalable infrastructure. In this blog, we explored the importance of the terraform.tfstate file, the risks of keeping it local, and how to overcome those risks by configuring remote state storage using AWS S3 and DynamoDB.
We also took a hands-on approach to:
Automate backend resource creation with a bash script
Deploy an EC2 instance running NGINX to host a simple portfolio
Clean up all resources to avoid charges
Whether you’re a beginner exploring Infrastructure as Code or someone working on real-world cloud projects, using remote state and state locking will take your Terraform workflows to the next level. 🌍
If you found this helpful, feel free to connect with me and follow my work.
Thanks for reading, and happy Terraforming! 💻🚀