🌟 Learn how to Deploy a Three-Tier Application on AWS EKS Using Terraform with best practices

💡 Introduction
Hey developers! 👋
Welcome to the world of cloud computing and automation. In this blog, we’re going to walk through an exciting real-world project — deploying a three-tier Todo List application on Amazon EKS (Elastic Kubernetes Service) using Terraform.
This project is perfect if you're looking to get hands-on experience with:
Provisioning infrastructure using Terraform
Working with Docker to containerize services
Deploying applications on AWS using EKS, ECR, IAM, and more
We’ll break it down step-by-step — from writing Terraform code to spinning up your Kubernetes cluster, containerizing the frontend, backend, and MongoDB services, and deploying everything seamlessly.
Whether you're new to DevOps or brushing up on your cloud skills, this guide will help you understand how everything connects in a modern microservices-based deployment.
So without further ado, let’s get started and bring our infrastructure to life! 🌐🛠️
🔧 Prerequisites: What You’ll Need Before We Start
Before we dive into the fun part — building and deploying — let’s quickly make sure your system is ready for action. Here’s what you’ll need:
✅ An AWS Account
If you don’t already have one, head over to aws.amazon.com and sign up. We’ll be using AWS services like EKS (Elastic Kubernetes Service), ECR (Elastic Container Registry), and IAM (Identity and Access Management), so having an account is essential.
✅ Docker Installed
We’ll use Docker to containerize the three components of our app: the frontend, backend, and MongoDB database. You can download Docker Desktop from the official Docker website and install it like any other app.
✅ Terraform Installed
Terraform will be our tool of choice for provisioning the infrastructure on AWS. You can download Terraform from terraform.io. Just install it — no need to configure anything yet.
That’s it! Once you have these basics set up, you’re good to go. Let’s start building!
🔐 Step 1: Set Up AWS CLI and IAM User
Before Terraform can talk to AWS and spin up resources, we need to set up the AWS CLI and create an IAM user with the right permissions. Let’s walk through it step-by-step.
👤 Create an IAM User
Log in to your AWS account as the root user (the one you used to sign up).
In the AWS Management Console, go to IAM > Users and click on “Create User”.
Give the user a name — something like three-tier-user works great — and click Next.
On the Set Permissions page, attach the policy named AdministratorAccess.
⚠️ Important: We’re giving full admin access here just to avoid permission issues during learning and experimentation. Never use this approach in production — always follow the Principle of Least Privilege!
Click Review and then Create User. You’re done with the IAM part!
📦 Install AWS CLI (Ubuntu/Linux)
If you're using Ubuntu (amd64), you can install the AWS CLI by running these commands in your terminal:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
If you're using a different operating system (like macOS or Windows), just head over to the official install guide here:
👉 AWS CLI Installation Guide
🔑 Generate Access Keys & Configure AWS CLI
Go back to the IAM dashboard and click on your new user (three-tier-user).
Under the Security Credentials tab, click on Create Access Key.
Choose Command Line Interface (CLI) as the use case, agree to the terms, and proceed.
Once the keys are generated, copy the Access Key ID and Secret Access Key (you’ll need them right away!).
Now, go to your terminal and configure the AWS CLI:
aws configure
It will prompt you to enter:
Access Key ID
Secret Access Key
Default region name: you can use us-east-1 for this demo
Default output format: enter json
That’s it! Your AWS CLI is now set up and ready to communicate with your AWS account 🚀
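Under the hood, aws configure simply writes two small INI files under ~/.aws/. If you prefer a scriptable, non-interactive setup, you can create them directly — a sketch with placeholder keys (never commit real credentials, and skip this if you already ran aws configure, since it overwrites those files):

```shell
# aws configure is equivalent to writing these two INI files.
# The key values below are placeholders for illustration only.
mkdir -p "$HOME/.aws"

cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretKeyDoNotCommit
EOF

cat > "$HOME/.aws/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF

# Sanity-check what was written
echo "Configured region: $(grep '^region' "$HOME/.aws/config" | cut -d' ' -f3)"
# -> Configured region: us-east-1
```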
🛠️ Step 2: Install Terraform and Set Up Remote Backend
Now that our AWS CLI is ready and configured, let’s install Terraform, our Infrastructure as Code (IaC) tool of choice for this project. We’ll also set up a secure and scalable way to store our Terraform state using an S3 bucket.
📥 Installing Terraform on Ubuntu (amd64)
If you're using Ubuntu on an amd64 system, follow these commands to install Terraform:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt-get install terraform
✅ After this, you can verify the installation with:
terraform -v
🖥️ If you're on a different operating system or architecture, follow the official installation guide here:
👉 Terraform Install Guide
🔐 AWS CLI + Terraform: Working Together
Since we’ve already configured the AWS CLI, Terraform will automatically use the credentials (access key & secret key) stored by aws configure. This means you’re ready to provision AWS resources securely and seamlessly.
☁️ Best Practice: Use Remote Backend for Terraform State
Terraform tracks the state of your infrastructure in a file called terraform.tfstate. By default, it’s stored locally, but that’s risky and not scalable. So, we’ll follow best practices and store this file remotely in an S3 bucket.
Here’s how to create an S3 bucket to act as your Terraform backend:
🪣 Create an S3 Bucket for State Storage
aws s3api create-bucket \
--bucket pravesh-terra-state-bucket \
--region us-east-1
📜 Enable Versioning for State History
aws s3api put-bucket-versioning \
--bucket pravesh-terra-state-bucket \
--versioning-configuration Status=Enabled
🔐 Enable Default Encryption
aws s3api put-bucket-encryption \
--bucket pravesh-terra-state-bucket \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}]
}'
And that’s it! You now have a secure, versioned, and encrypted S3 bucket ready to store your Terraform state files — a key step toward building a production-grade infrastructure.
📦 Step 3: Clone the Project and Provision Infrastructure with Terraform
With all the groundwork done — AWS CLI set up, Terraform installed, and the backend ready — it’s time to move on to the actual project!
The codebase for our three-tier application is available on my GitHub repository:
👉 GitHub Repo: https://github.com/Pravesh-Sudha/3-tier-app-Deployment
🚀 Clone the Repository
To get started, open your terminal and run the following commands:
git clone https://github.com/Pravesh-Sudha/3-tier-app-Deployment
cd 3-tier-app-Deployment/
Inside the cloned repo, you'll find a folder named terra-config/. That’s where all the Terraform magic happens. Navigate into that directory:
cd terra-config/
Now initialize the Terraform backend (which we configured to use your S3 bucket earlier):
terraform init
This will configure Terraform to use the remote backend for storing the state file. If your bucket name is different from mine (pravesh-terra-state-bucket), make sure to update the name in backend.tf.
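For reference, the backend block in backend.tf looks roughly like this — a sketch using the bucket name and state key from this post, so adjust both to match your own setup:

```hcl
terraform {
  backend "s3" {
    bucket = "pravesh-terra-state-bucket" # replace with your own bucket name
    key    = "eks/terraform.tfstate"      # path of the state file inside the bucket
    region = "us-east-1"
  }
}
```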
📁 Understanding the Terraform Code Structure
Instead of dumping everything into a single main.tf file, I’ve broken the configuration into logical modules for clarity and scalability. Here’s a quick overview:
provider.tf: Specifies the cloud provider. In our case, it’s AWS (no surprise there!).
backend.tf: Configures Terraform to store state remotely in our S3 bucket.
ecr.tf: Creates two public repositories in ECR, 3-tier-frontend and 3-tier-backend, for storing Docker images.
vpc.tf: Fetches the default VPC and subnet details.
role.tf: Defines IAM roles: one for the EKS cluster (includes AmazonEKSClusterPolicy) and one for the Node Group (includes policies like AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy).
eks.tf: Provisions the EKS cluster named Three-tier-cloud.
node_group.tf: Creates the worker node group for the cluster with one t2.medium EC2 instance.
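To make that layout concrete, here’s a hedged sketch of what the cluster and node group resources in eks.tf and node_group.tf typically look like. The exact arguments — the resource labels, role references, and subnet data source — are assumptions for illustration; treat the repo’s files as the source of truth:

```hcl
resource "aws_eks_cluster" "this" {
  name     = "Three-tier-cloud"
  role_arn = aws_iam_role.eks_cluster.arn # cluster role defined in role.tf

  vpc_config {
    subnet_ids = data.aws_subnets.default.ids # default subnets fetched in vpc.tf
  }
}

resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "three-tier-nodes" # name assumed for illustration
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = data.aws_subnets.default.ids
  instance_types  = ["t2.medium"]

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 1
  }
}
```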
⏳ Apply the Terraform Configuration
Now we’re ready to provision the infrastructure! Run the following command:
terraform apply --auto-approve
⏱️ This might take 15–20 minutes, especially since provisioning EKS clusters and node groups can take some time. Be patient — AWS is building your cloud infrastructure behind the scenes.
🐳 Push Docker Images to ECR
Once the infrastructure is up, it’s time to push our Docker images for the frontend and backend to AWS ECR.
Go to your AWS Console > ECR > Repositories
Click on the 3-tier-frontend repository
Click on “View push commands” — AWS will show you four CLI commands
Now, go to the frontend/ folder in your project directory:
cd ../frontend/
Run each of the four commands one by one to build the image and push it to ECR.
Repeat the same steps for the 3-tier-backend repository:
Go back to ECR > Repositories
Select 3-tier-backend and click View push commands
Navigate to the backend directory:
cd ../backend/
Run the ECR commands provided to push the backend Docker image.
🎉 Once done, your container images will be hosted in your private AWS ECR repositories — ready to be deployed to your EKS cluster!
🌐 Step 4: Deploy to EKS with kubectl and Set Up Ingress via ALB
Now that your EKS cluster and ECR repositories are ready, it’s time to interact with the cluster, deploy your workloads, and expose your application to the internet. We'll use kubectl for that — the command-line tool to manage Kubernetes clusters.
🧰 Install kubectl
If you're using Ubuntu on amd64, run the following to install kubectl:
# Note: this pins kubectl 1.19.6; ideally install a version within one minor release of your cluster
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
If you’re using a different OS/architecture, install it using the official instructions:
👉 kubectl Install Guide
🔧 Connect kubectl to Your EKS Cluster
Now configure kubectl to use your EKS cluster:
aws eks update-kubeconfig --region us-east-1 --name Three-tier-cloud
This updates your ~/.kube/config file so that you can interact with your new EKS cluster using kubectl.
📁 Update Kubernetes Manifests
Inside the repo directory 3-tier-app-Deployment/k8s_manifests/, you’ll find the Kubernetes manifests for deploying the frontend, backend, and MongoDB services.
Before applying them, update the image URIs in both deployment files with the correct values from ECR.
🔄 Update backend_deployment.yml:
Find this block:
spec:
containers:
- name: backend
image: <YOUR_IMAGE_URI>
imagePullPolicy: Always
Replace <YOUR_IMAGE_URI> with the full image URL from your 3-tier-backend ECR repo (latest tag).
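If you’re unsure what the image URI should look like: a private ECR image URI follows a fixed pattern, account-id.dkr.ecr.region.amazonaws.com/repo:tag. Here’s a small sketch that assembles one — the account ID is a made-up placeholder, so substitute your own:

```shell
ACCOUNT_ID=123456789012   # placeholder -- use your real AWS account ID
REGION=us-east-1
REPO=3-tier-backend
TAG=latest

# Assemble the URI in the standard private-ECR format
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
echo "$IMAGE_URI"
# -> 123456789012.dkr.ecr.us-east-1.amazonaws.com/3-tier-backend:latest
```

If your repositories were created as public ECR repositories instead, the URI has a different shape (public.ecr.aws/your-alias/repo:tag) — the “View push commands” page in the console always shows the exact one to use.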
🔄 Update frontend_deployment.yml:
Do the same in the frontend manifest with the image URI from the 3-tier-frontend ECR repo.
🧱 Create a Namespace for the App
Let’s keep things clean by isolating our app into a dedicated Kubernetes namespace:
kubectl create namespace workshop
kubectl config set-context --current --namespace workshop
🚀 Deploy the App Components
Apply the deployment and service files for each component:
kubectl apply -f frontend-deployment.yaml -f frontend-service.yaml
kubectl apply -f backend-deployment.yaml -f backend-service.yaml
# Deploy MongoDB
cd mongo/
kubectl apply -f .
At this point, your services are up and running within the cluster — but we still need a way to expose them to the outside world.
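The services you just applied are typically of type ClusterIP, meaning they get an internal-only address — which is exactly why the next step adds a load balancer. As a hedged sketch (the name, labels, and port below are assumptions for illustration, not taken from the repo), a backend Service manifest generally looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api            # name assumed for illustration
  namespace: workshop
spec:
  type: ClusterIP      # internal only -- no external IP, hence the need for an Ingress
  selector:
    app: backend       # must match the labels on the backend pods
  ports:
    - port: 3500       # port is an assumption; check backend-service.yaml in the repo
      targetPort: 3500
```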
🌍 Set Up Application Load Balancer (ALB) and Ingress
To route external traffic into your Kubernetes services, we’ll use an AWS Application Load Balancer along with an Ingress Controller.
📜 Create an IAM Policy for the Load Balancer
The IAM policy JSON (iam_policy.json) is included in the Kubernetes manifests directory:
cd k8s_manifests/
Create the IAM policy in AWS:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
🔒 Associate OIDC Provider with EKS
To enable IAM roles for Kubernetes service accounts, associate an OIDC provider with your EKS cluster.
First, install eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Then associate the OIDC provider:
eksctl utils associate-iam-oidc-provider \
--region=us-east-1 \
--cluster=Three-tier-cloud \
--approve
🔗 Create a Service Account for the Load Balancer
Replace <Your-Account-Number> with your actual AWS account ID and run:
eksctl create iamserviceaccount \
--cluster=Three-tier-cloud \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<Your-Account-Number>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve \
--region=us-east-1
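For context, this is eksctl’s IRSA (IAM Roles for Service Accounts) flow: it creates the IAM role and a Kubernetes ServiceAccount annotated with that role’s ARN. The result is roughly equivalent to this manifest — the eks.amazonaws.com/role-arn annotation key is the standard IRSA convention, and the rest simply mirrors the flags above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # Pods using this ServiceAccount can assume this IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::<Your-Account-Number>:role/AmazonEKSLoadBalancerControllerRole
```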
🧰 Install Helm and Deploy the Load Balancer Controller
We’ll use Helm to install the AWS Load Balancer Controller:
sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=Three-tier-cloud \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
Check if it’s running:
kubectl get deployment -n kube-system aws-load-balancer-controller
🛣️ Apply Ingress Configuration
Now go back to the k8s_manifests/ directory and apply the ingress resource:
kubectl apply -f full_stack_lb.yaml
Wait for 5–7 minutes to allow the ingress and ALB to be fully provisioned.
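For context, an ALB ingress manifest like full_stack_lb.yaml generally follows the shape below. This is a hedged sketch with assumed service names, paths, and ports — the real file in the repo is authoritative — but the two annotations shown are the standard ones the AWS Load Balancer Controller reads:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: workshop
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # create a public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip          # route traffic straight to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api        # backend service name (assumed)
                port:
                  number: 3500   # assumed port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # frontend service name (assumed)
                port:
                  number: 3000   # assumed port
```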
🌐 Access Your Application
To get the ALB endpoint:
kubectl get ing -n workshop
You’ll see an ADDRESS field in the output. Copy that URL, paste it in your browser, and voilà 🎉 — your three-tier application is live on AWS!
🧹 Step 5: Clean Up AWS Resources
Congratulations on successfully deploying your three-tier application on AWS EKS using Terraform! 🎉
Before we wrap things up, it’s important to clean up the resources we created — to avoid any unexpected AWS charges.
🗑️ Delete Docker Images from ECR
Head over to the ECR dashboard in the AWS Console.
In the repositories list, select both 3-tier-backend and 3-tier-frontend.
Delete the images from each repository.
💣 Destroy Infrastructure with Terraform
Now let’s destroy the entire infrastructure from your terminal. Navigate to the terra-config/ directory and run:
terraform destroy --auto-approve
Terraform will tear down the EKS cluster, node group, IAM roles, VPC config, ECR repositories, and more.
🧽 Delete Terraform State File and S3 Bucket
After destroying your resources, don’t forget to remove the Terraform state file and the bucket itself:
aws s3 rm s3://pravesh-terra-state-bucket/eks/terraform.tfstate
Then go to the S3 Dashboard, empty the bucket manually (if needed), and delete the bucket to finish the cleanup process.
⚠️ Make sure to empty and delete the bucket — leftover objects and old versions can incur unwanted charges.
✅ Conclusion: What You’ve Learned
And that’s a wrap! 🚀
In this project, you’ve gone through the complete lifecycle of deploying a real-world three-tier application using modern DevOps tools and cloud infrastructure:
You learned how to use Terraform to provision infrastructure as code.
You created and managed AWS resources like EKS, ECR, IAM, and S3.
You containerized applications and deployed them with Kubernetes.
You exposed your app to the internet using an Application Load Balancer and Ingress.
And finally, you followed best practices like remote state management and safe resource cleanup.
This project isn't just a demo — it’s a strong foundation you can build on for production-grade cloud-native applications.
If this blog helped you, consider sharing it with others or giving the GitHub repo a star ⭐!
💬 Have questions, suggestions, or want to collaborate?
Reach out to me on Twitter, LinkedIn, or explore more on blog.praveshsudha.com