<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Pravesh Sudha]]></title><description><![CDATA[Pravesh Sudha]]></description><link>https://blog.praveshsudha.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1740135546619/c019c7fe-5455-4947-adfd-014a1c1c87fc.png</url><title>Pravesh Sudha</title><link>https://blog.praveshsudha.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 04:15:33 GMT</lastBuildDate><atom:link href="https://blog.praveshsudha.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="first" href="https://blog.praveshsudha.com/rss.xml"/><atom:link rel="next" href="https://blog.praveshsudha.com/rss.xml?after=Njg3ZjRhZmVmZGM1ODI1ZjNmOWU1NGE1XzIwMjUtMDctMjJUMDg6MjU6MzQuNzE1Wg=="/><item><title><![CDATA[Kubernetes for Beginners: Deploying an Nginx–Node–Redis Application]]></title><description><![CDATA[<p>Hola Amigos! 👋</p>
<p>Today, we are embarking on a brand new series: <strong>K8s with Pravesh</strong> 🚀, where we'll break down Kubernetes, understand what it really is, and more importantly, how you can <em>actually</em> use it in a practical, no-BS way.</p>
<p>In today's blog, we'll dive into the fundamentals: <strong>Deployments, Services, and ConfigMaps</strong>, and use them to deploy a <strong>three-tier application on Minikube</strong>.</p>
<p>Now you might be thinking: <em>"What's new here? There are already thousands of blogs doing the same thing."</em></p>
<p>And honestly, you're not wrong.</p>
<p>But hold your horses for a second 🐎</p>
<p>This isn't just another "apply this YAML and it works" kind of tutorial. We're going to:</p>
<ul>
<li><p>Understand <strong>what's really happening under the hood</strong></p>
</li>
<li><p>Debug real issues (yes, the ones that <em>actually</em> happen)</p>
</li>
<li><p>And build intuition so you don't just run Kubernetes, you <strong>get it</strong></p>
</li>
</ul>
<p>So let's dive in. 🔥</p>
<hr />
<h2>🛠 Pre-Requisites</h2>
<p>Before we dive deep, there are a couple of things you need to have set up. Nothing fancy, just the essentials to get your Kubernetes playground up and running.</p>
<h3>🔹 Docker / Docker Desktop</h3>
<p>We'll be running Minikube using Docker, so make sure you have Docker installed on your system.</p>
<p>👉 Install it from here: <a href="https://docs.docker.com/get-started/get-docker/">https://docs.docker.com/get-started/get-docker/</a></p>
<h3>🔹 Minikube</h3>
<p>Think of Minikube as your <strong>personal Kubernetes cluster</strong>: lightweight, local, and perfect for experimenting and learning all the cool stuff without needing a cloud setup.</p>
<p>👉 Download it from here: <a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Farm64%2Fstable%2Fbinary+download">https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Farm64%2Fstable%2Fbinary+download</a></p>
<hr />
<h2>🎥 Practical Demonstration</h2>
<p><a class="embed-card" href="https://youtu.be/ZYlRwMf4lYA">https://youtu.be/ZYlRwMf4lYA</a></p>

<h2>🤔 What is Kubernetes (K8s)?</h2>
<p>At its core, <strong>Kubernetes is a container orchestration tool</strong>.</p>
<p>Now that sounds fancy, but let's simplify it a bit.</p>
<p>Think of Kubernetes as a <strong>Head Chef in a restaurant</strong> 👨‍🍳 It makes sure:</p>
<ul>
<li><p>Everyone is doing their job properly</p>
</li>
<li><p>Work is flowing smoothly</p>
</li>
<li><p>And if something breaks, it steps in and fixes it</p>
</li>
</ul>
<p>That's the <em>layman's definition</em>.</p>
<h3>The Real Meaning</h3>
<p>In technical terms, Kubernetes is responsible for:</p>
<ul>
<li><p>Managing containers</p>
</li>
<li><p>Scaling them</p>
</li>
<li><p>Ensuring they are always running</p>
</li>
<li><p>Handling communication between them</p>
</li>
</ul>
<p>You can think of it as an advanced version of Docker Compose, but built for <strong>production-grade systems</strong>.</p>
<h3>The Smallest Unit: Pod</h3>
<p>In Kubernetes, the smallest deployable unit is a <strong>Pod</strong>.</p>
<p>👉 A Pod is basically a wrapper around your container(s)</p>
<ul>
<li><p>It can run <strong>one or more containers</strong></p>
</li>
<li><p>These containers share:</p>
<ul>
<li><p>Network</p>
</li>
<li><p>Storage</p>
</li>
</ul>
</li>
</ul>
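<p>To make this concrete, here is a minimal Pod manifest as a sketch (the name is illustrative; the image matches the one used later in this post). In practice you will rarely write bare Pods, as the next section explains:</p>
<pre><code class="language-yaml"># A bare Pod: one container, sharing the Pod's network and storage
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80
</code></pre>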
<p>But here's the thing:</p>
<p>Managing Pods manually? 😵‍💫 Not a great idea.</p>
<h3>Enter Deployments</h3>
<p>To solve that, we have <strong>Deployments</strong>.</p>
<p>A Deployment is like a <strong>blueprint for your Pods</strong>.</p>
<p>You define:</p>
<ul>
<li><p>Container image</p>
</li>
<li><p>Number of replicas</p>
</li>
<li><p>Ports</p>
</li>
<li><p>Volumes</p>
</li>
<li><p>Other configurations</p>
</li>
</ul>
<p>And Kubernetes takes care of:</p>
<ul>
<li><p>Creating Pods</p>
</li>
<li><p>Scaling them</p>
</li>
<li><p>Replacing them if they crash</p>
</li>
</ul>
<p>💥 Much easier to manage.</p>
<h3>How Do Pods Talk to Each Other?</h3>
<p>Back to our restaurant analogy 🍽</p>
<p>The waiter needs to communicate with the chef, right?</p>
<p>But in Kubernetes 👉 Pods don't automatically talk to each other.</p>
<p>We need something in between.</p>
<h3>Services: The Communication Bridge</h3>
<p><strong>Services</strong> act as a bridge between Pods.</p>
<p>They provide:</p>
<ul>
<li><p>Stable networking</p>
</li>
<li><p>Internal DNS</p>
</li>
<li><p>Load balancing</p>
</li>
</ul>
<p>There are 3 main types:</p>
<h3>🔹 ClusterIP</h3>
<ul>
<li><p>Default type</p>
</li>
<li><p>Used for <strong>internal communication only</strong></p>
</li>
<li><p>Not accessible from outside the cluster</p>
</li>
</ul>
<h3>🔹 NodePort</h3>
<ul>
<li><p>Exposes the service on a <strong>specific port on the node</strong></p>
</li>
<li><p>Accessible from outside using:</p>
<pre><code class="language-plaintext">&lt;Node-IP&gt;:&lt;Port&gt;
</code></pre>
</li>
</ul>
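<p>As a sketch (names illustrative), a NodePort Service looks like this; Kubernetes picks a port in the 30000–32767 range unless you set <code>nodePort</code> explicitly:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: web-nodeport     # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80             # Service port inside the cluster
    targetPort: 80       # container port
    nodePort: 30080      # optional; auto-assigned from 30000-32767 if omitted
</code></pre>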
<h3>🔹 LoadBalancer</h3>
<ul>
<li><p>Exposes the app to the <strong>outside world</strong></p>
</li>
<li><p>Commonly used in cloud environments (AWS, GCP, etc.)</p>
</li>
</ul>
<h3>ConfigMaps: Handling Custom Configurations</h3>
<p>Back to the restaurant:</p>
<p>Imagine a customer walks in and says:</p>
<blockquote>
<p><em>"I want a Caffè macchiato, with a little bit of soy, enough to make me go OH BOY!"</em> (Kevin Hart fans, you know 😄)</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/4c20c5f7-5519-4f3f-919d-56a91a8fb951.gif" alt="" style="display:block;margin:0 auto" />

<p>Handling custom requests manually can get messy.</p>
<p>But in Kubernetes, we have <strong>ConfigMaps</strong> for this.</p>
<p>👉 ConfigMaps allow you to:</p>
<ul>
<li><p>Store <strong>non-confidential data</strong></p>
</li>
<li><p>Use it inside your applications</p>
</li>
<li><p>Keep configs separate from your code</p>
</li>
</ul>
<p>For sensitive data? 👉 Use <strong>Secrets</strong></p>
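<p>As a minimal sketch (names and keys here are illustrative), a ConfigMap and the snippet that loads its keys into a container as environment variables could look like:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config       # illustrative name
data:
  APP_MODE: "production"
---
# Inside a Pod or Deployment container spec, load all keys as env vars:
#   envFrom:
#   - configMapRef:
#       name: app-config
</code></pre>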
<h3>YAML: The Language of Kubernetes</h3>
<p>All resources in Kubernetes are defined using <strong>YAML files</strong>.</p>
<p>You describe:</p>
<ul>
<li><p>What you want</p>
</li>
<li><p>And Kubernetes makes it happen</p>
</li>
</ul>
<p>If you want to explore more, check out the official docs: 👉 <a href="https://kubernetes.io/docs/setup/">https://kubernetes.io/docs/setup/</a></p>
<hr />
<h2>Practical Demonstration</h2>
<p>Enough with the theory: now let's get our hands dirty 🔥</p>
<p>So far, we've covered:</p>
<ul>
<li><p>Deployments</p>
</li>
<li><p>Services</p>
</li>
<li><p>ConfigMaps</p>
</li>
</ul>
<p>And to bring all of this together, we'll deploy a <strong>three-tier application (Nginx–Node–Redis)</strong>.</p>
<p>I've actually used this same app in one of my earlier projects to demonstrate CI/CD workflows with GitHub Actions and Terraform. If you're curious, check it out here: 👉 <a href="https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws">https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws</a></p>
<h3>Step 1: Clone the Project</h3>
<pre><code class="language-bash">git clone https://github.com/Pravesh-Sudha/nginx-node-redis.git
</code></pre>
<p>Open the project in your favorite editor (VS Code works great).</p>
<h3>Understanding the App</h3>
<p>This is a simple Node.js application that:</p>
<ul>
<li><p>Displays a <strong>request counter</strong></p>
</li>
<li><p>Increments the count on every refresh</p>
</li>
<li><p>Stores data in <strong>Redis</strong></p>
</li>
<li><p>Uses <strong>Nginx as a reverse proxy</strong> (serving on port 80 instead of 5000)</p>
</li>
</ul>
<h3>Step 2: Run with Docker Compose</h3>
<p>Before jumping into Kubernetes, let's run it locally:</p>
<pre><code class="language-bash">docker-compose up --build
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/70b43002-4628-47c2-8f93-58192a60510c.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Make sure Docker Desktop is installed and running.</p>
</blockquote>
<p>You should see logs in your terminal and the app running in your browser.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/43433727-fa8d-40a1-8bf3-933732e0785b.png" alt="" style="display:block;margin:0 auto" />

<p>Once done:</p>
<pre><code class="language-bash">Ctrl + C
</code></pre>
<hr />
<h3>Step 3: Move to Kubernetes</h3>
<p>Now comes the interesting part.</p>
<p>Inside the project:</p>
<pre><code class="language-bash">cd nginx-node-redis/kube-config/
</code></pre>
<p>You'll find three directories:</p>
<ul>
<li><p><code>nginx/</code></p>
</li>
<li><p><code>node/</code></p>
</li>
<li><p><code>redis/</code></p>
</li>
</ul>
<p>Each contains:</p>
<ul>
<li><p>Deployment YAML</p>
</li>
<li><p>Service YAML</p>
</li>
</ul>
<h3>📦 Nginx Deployment</h3>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          
      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx-config
</code></pre>
<h3>A Note on AI &amp; YAML</h3>
<p>The best thing about AI? 👉 You can generate YAML files instantly.</p>
<p>But what happens when things break?</p>
<p>That's where <strong>fundamentals matter</strong>.</p>
<p>Let's break this down 👇</p>
<h3>Understanding the Deployment</h3>
<h3>1. API Version &amp; Kind</h3>
<p>Defines what resource we are creating:</p>
<pre><code class="language-yaml">kind: Deployment
</code></pre>
<h3>2. Labels (IMPORTANT)</h3>
<p>Labels appear in three places, and each has a role:</p>
<ul>
<li><p><strong>metadata.labels</strong>: tags the Deployment itself</p>
</li>
<li><p><strong>spec.selector.matchLabels</strong>: tells the Deployment which Pods to manage</p>
</li>
<li><p><strong>template.metadata.labels</strong>: applied to Pods (used by Services)</p>
</li>
</ul>
<p>👉 This is how Kubernetes connects resources.</p>
<h3>3. Container Spec</h3>
<pre><code class="language-yaml">image: nginx:1.14.2
ports:
  - containerPort: 80
</code></pre>
<p>Defines:</p>
<ul>
<li><p>Image</p>
</li>
<li><p>Port</p>
</li>
</ul>
<h3>4. ConfigMap Mount</h3>
<pre><code class="language-yaml">volumeMounts:
- name: nginx-config-volume
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf
</code></pre>
<p>👉 This mounts your custom Nginx config into the container.</p>
<h3>🌐 Nginx Service</h3>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>Here:</p>
<ul>
<li><p>We use <strong>ClusterIP</strong></p>
</li>
<li><p>Selector matches:</p>
<pre><code class="language-yaml">app: nginx
</code></pre>
</li>
</ul>
<p>👉 This connects the Service to Pods.</p>
<h3>Nginx ConfigMap</h3>
<pre><code class="language-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {}

    http {
      upstream loadbalancer {
        server node-service:5000;
      }

      server {
        listen 80;

        location / {
          proxy_pass http://loadbalancer;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location = /favicon.ico {
          log_not_found off;
          access_log off;
        }
      }
    }
</code></pre>
<p>👉 Here we:</p>
<ul>
<li><p>Override default Nginx config</p>
</li>
<li><p>Route traffic to:</p>
<pre><code class="language-bash">node-service:5000
</code></pre>
</li>
</ul>
<h3>Step 4: Deploy to Minikube</h3>
<pre><code class="language-bash"># Start Minikube
minikube start

# Go to config directory
cd nginx-node-redis/kube-config/

# Deploy Redis
cd redis/ &amp;&amp; kubectl apply -f deploy.yml -f svc.yml
cd ..

# Deploy Node
cd node/ &amp;&amp; kubectl apply -f deploy.yaml -f svc.yml
cd ..

# Deploy Nginx
cd nginx/ &amp;&amp; kubectl apply -f deploy.yml -f svc.yml -f configmap.yaml
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/6e9932c1-21cb-4f1d-b6ec-a4f4549fed13.png" alt="" style="display:block;margin:0 auto" />

<h3>Wait for Pods</h3>
<pre><code class="language-bash">kubectl get pods -w
</code></pre>
<p>Wait until all pods are:</p>
<pre><code class="language-bash">Running
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/dba8e013-baa4-4798-83a6-9c30a170ff51.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>Access the App</h2>
<pre><code class="language-bash">minikube service nginx-service
</code></pre>
<p>👉 This opens your app in the browser, now running on Kubernetes 🎉</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/713b9fbf-bfa3-4432-9643-635bc0a5bb74.png" alt="" style="display:block;margin:0 auto" />

<h3>Self-Healing in Action</h3>
<p>Heres where Kubernetes shines.</p>
<p>Let's break something 😈</p>
<pre><code class="language-bash">kubectl delete pod &lt;pod-name&gt;
</code></pre>
<p>Now check:</p>
<pre><code class="language-bash">kubectl get pods
</code></pre>
<p>👉 You'll see:</p>
<ul>
<li>A new pod automatically created</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/4f34aded-cd2d-4ad8-b5be-79d496653ef7.png" alt="" style="display:block;margin:0 auto" />

<h3>🧠 What just happened?</h3>
<p>Kubernetes ensures:</p>
<blockquote>
<p>Actual state = Desired state</p>
</blockquote>
<p>Even if you:</p>
<ul>
<li><p>Delete a pod</p>
</li>
<li><p>Crash a container</p>
</li>
</ul>
<p>👉 Kubernetes will bring it back</p>
<hr />
<h2>🔍 Whats Happening Under the Hood?</h2>
<p>Now that everything is up and running, lets take a step back and understand <strong>how things are actually working behind the scenes</strong> 👇</p>
<h3>1. Accessing the Application</h3>
<p>When you run:</p>
<pre><code class="language-bash">minikube service nginx-service
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/7ca3e0e2-fd44-4f8b-a0ca-1ef5c896d17b.png" alt="" style="display:block;margin:0 auto" />

<p>👉 Minikube exposes your service and gives you a <strong>URL with a port</strong>.</p>
<h3>2. Request Hits Nginx Service</h3>
<p>Once you hit that URL:</p>
<ul>
<li><p>The <strong>Nginx Service</strong> receives the request</p>
</li>
<li><p>It looks at its selector:</p>
<pre><code class="language-yaml">app: nginx
</code></pre>
</li>
<li><p>And forwards the request to all matching <strong>Nginx Pods</strong></p>
</li>
</ul>
<h3>3. Inside the Nginx Pod</h3>
<p>Inside the pod:</p>
<ul>
<li><p>Nginx uses the <strong>custom config (via ConfigMap)</strong></p>
</li>
<li><p>The request is proxied to:</p>
<pre><code class="language-bash">node-service:5000
</code></pre>
</li>
</ul>
<h3>4. Node Service Load Balancing</h3>
<p>Now the interesting part 👀</p>
<ul>
<li><p><code>node-service</code> is a <strong>ClusterIP Service</strong></p>
</li>
<li><p>It has multiple pods (replicas = 3)</p>
</li>
</ul>
<p>👉 Kubernetes automatically distributes traffic:</p>
<pre><code class="language-text">             node-service
            /      |      \
   node-pod1   node-pod2   node-pod3
</code></pre>
<h3>5. Node App Talks to Redis</h3>
<p>Inside your Node app:</p>
<ul>
<li><p>It connects to:</p>
<pre><code class="language-bash">redis-service
</code></pre>
</li>
<li><p>Stores:</p>
<ul>
<li><p>Request count</p>
</li>
<li><p>Cache data</p>
</li>
</ul>
</li>
</ul>
<h3>6. Response Flow</h3>
<p>Finally, the response travels back:</p>
<pre><code class="language-text">Redis → Node → Nginx → Browser
</code></pre>
<p>🎉 And you see the updated request count</p>
<h3>🧠 Key Insight</h3>
<p>Notice something important here:</p>
<p>👉 We never used a single IP address.</p>
<p>Everything works using:</p>
<ul>
<li><p><strong>Service names</strong></p>
</li>
<li><p><strong>Internal DNS</strong></p>
</li>
<li><p><strong>Labels &amp; selectors</strong></p>
</li>
</ul>
<p>This is called <strong>Service Discovery</strong>, one of the most powerful features of Kubernetes.</p>
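<p>Under the hood, every Service gets a predictable internal DNS name; the short name works within the same namespace, and the fully qualified form (shown here assuming the <code>default</code> namespace) works cluster-wide:</p>
<pre><code class="language-plaintext">&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local
# e.g. node-service resolves to node-service.default.svc.cluster.local
</code></pre>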
<h3>Scaling Made Easy</h3>
<p>Want more traffic handling capacity?</p>
<p>Just update:</p>
<pre><code class="language-yaml">replicas: 3
</code></pre>
<p>👉 Increase or decrease as needed</p>
<p>👉 No changes required anywhere else</p>
<p>Kubernetes handles the rest</p>
<h3>Cleanup</h3>
<p>Once you're done experimenting, you can delete the cluster:</p>
<pre><code class="language-bash">minikube delete
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/13bcdfc5-1020-43f4-b080-ffed309af4c2.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🎯 Conclusion</h2>
<p>And that's a wrap for this one! 🚀</p>
<p>In this blog, we didn't just deploy an application on Kubernetes; we actually <strong>understood what's happening behind the scenes</strong>. From Deployments and Services to ConfigMaps and internal service discovery, you now have a solid foundation to start building real-world K8s projects.</p>
<p>More importantly, you saw how:</p>
<ul>
<li><p>Kubernetes replaces static setups like Docker Compose with <strong>dynamic, scalable systems</strong></p>
</li>
<li><p>Services enable seamless communication without worrying about IPs</p>
</li>
<li><p>And how the system <strong>self-heals</strong> to match the desired state</p>
</li>
</ul>
<p>This is just the beginning of the <strong>K8s with Pravesh</strong> series. In the upcoming blogs, we'll go deeper into more advanced concepts and build even more powerful systems 💥</p>
<h3>🔗 Let's Connect</h3>
<p>If you found this helpful, feel free to connect with me and follow along for more DevOps and Kubernetes content:</p>
<ul>
<li><p>💼 LinkedIn: <a href="https://www.linkedin.com/in/pravesh-sudha">https://www.linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>📝 Blog: <a href="https://blog.praveshsudha.com">https://blog.praveshsudha.com</a></p>
</li>
<li><p>💻 GitHub: <a href="https://github.com/Pravesh-Sudha">https://github.com/Pravesh-Sudha</a></p>
</li>
</ul>
<p>If you have any questions, got stuck somewhere, or just want to discuss ideas, my DMs are always open 🙌</p>
<p>Until next time: keep building, keep learning, and keep shipping 🚀</p>
]]></description><link>https://blog.praveshsudha.com/kubernetes-for-beginners-deploying-an-nginx-node-redis-application</link><guid isPermaLink="true">https://blog.praveshsudha.com/kubernetes-for-beginners-deploying-an-nginx-node-redis-application</guid><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Building an AI-Powered CI/CD Copilot with Jenkins and AWS Lambda]]></title><description><![CDATA[<h2>💡 Introduction</h2>
<p>Hey folks, welcome to the world of Agentic Tools and DevOps.</p>
<p>Today, we're diving into CI/CD pipelines and exploring how we can debug them efficiently and almost instantly using AI. In this project, we'll build an AI-powered CI/CD Copilot where <strong>AWS Lambda</strong> serves as the core logic layer. This Lambda function will interact with the Google Gemini API to analyze pipeline failures and help us debug them intelligently.</p>
<p>The goal of this project is not just to integrate AI into a CI/CD workflow, but to help you understand how to build your own AI agent from scratch, one that can assist in real-world DevOps scenarios.</p>
<p>So, without further ado, let's get started.</p>
<hr />
<h2>💡 Prerequisites</h2>
<p>Before we begin, make sure you have the following requirements in place:</p>
<ul>
<li><p><strong>Docker &amp; Docker Hub account</strong><br />We will run parts of this project inside Docker containers. Later, we'll push our custom image to Docker Hub, so make sure you have both Docker installed and a Docker Hub account ready.</p>
</li>
<li><p><strong>Jenkins (Our CI/CD Tool)</strong><br />We'll use Jenkins for demonstration purposes. You can either:</p>
<ul>
<li><p>Run Jenkins as a Docker container, or</p>
</li>
<li><p>Install it directly from the official website.</p>
</li>
</ul>
</li>
<li><p><strong>Terraform</strong><br />We will provision our infrastructure, including the Gemini API key (stored securely) and the AWS Lambda function, using Terraform.</p>
<p>Make sure:</p>
<ul>
<li><p>Terraform CLI is installed</p>
</li>
<li><p>Your AWS credentials are configured</p>
</li>
<li><p>The IAM user has permissions for<strong>AWS Lambda</strong>and<strong>AWS Secrets Manager</strong></p>
</li>
</ul>
<p>If you're new to Terraform setup, you can follow this guide:<br />👉 <a href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli">https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli</a></p>
</li>
</ul>
<h2>🎥 Youtube Demonstration</h2>
<iframe></iframe>

<hr />
<h2>💡 How It Works</h2>
<p>The complete source code for this project is available in this GitHub repository:<br />👉 <a href="https://github.com/Pravesh-Sudha/ai-devops-agent">https://github.com/Pravesh-Sudha/ai-devops-agent</a></p>
<p>Navigate to the <code>cicd-copilot</code> directory to follow along.</p>
<p>If you've been following my work, you might recognize this project. I originally used this same <strong>Node.js Book Reader application</strong> to demonstrate how Docker works with Node.js. For this AI-powered CI/CD Copilot, I've made specific modifications, particularly in the <strong>Jenkinsfile</strong> and the <code>terra-config</code> directory.</p>
<p>Inside the <code>terra-config</code> directory, you'll find:</p>
<ul>
<li><p><strong>main.tf</strong>: provisions the following:</p>
<ul>
<li><p>AWS Lambda function</p>
</li>
<li><p>AWS Secrets Manager secret (to securely store the Gemini API key)</p>
</li>
</ul>
</li>
<li><p><strong>lambda.zip</strong>: the packaged Lambda deployment artifact (zipped <code>lambda_function.py</code>)</p>
</li>
<li><p><strong>lambda_function.py</strong>: the core of this project.<br />This file contains the AI agent logic and the structured prompt sent to the Gemini API.</p>
</li>
<li><p><strong>iam.tf</strong>: defines the IAM roles and permissions required for:</p>
<ul>
<li><p>AWS Lambda</p>
</li>
<li><p>AWS Secrets Manager</p>
</li>
</ul>
</li>
</ul>
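<p>For orientation, the Secrets Manager and Lambda pieces in <code>main.tf</code> follow roughly this shape (a simplified sketch: the resource and function names are illustrative, the IAM role is assumed to be defined in <code>iam.tf</code>, and the repository remains the source of truth):</p>
<pre><code class="language-hcl">variable "gemini_api_key" {
  type      = string
  sensitive = true
}

# Store the Gemini API key securely in Secrets Manager
resource "aws_secretsmanager_secret" "gemini" {
  name = "gemini-api-key"
}

resource "aws_secretsmanager_secret_version" "gemini" {
  secret_id     = aws_secretsmanager_secret.gemini.id
  secret_string = var.gemini_api_key
}

# The Copilot's core logic layer
resource "aws_lambda_function" "copilot" {
  function_name = "cicd-copilot"                 # illustrative name
  filename      = "lambda.zip"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.12"
  role          = aws_iam_role.lambda_role.arn   # assumed to live in iam.tf
  environment {
    variables = {
      GEMINI_SECRET_NAME = aws_secretsmanager_secret.gemini.name
    }
  }
}
</code></pre>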
<h3>Architecture Overview</h3>
<p>The core idea behind this project is simple:</p>
<ol>
<li><p>Jenkins detects a pipeline failure.</p>
</li>
<li><p>It collects contextual information (stage name, build ID, logs).</p>
</li>
<li><p>It sends that data to AWS Lambda.</p>
</li>
<li><p>Lambda calls the Gemini API.</p>
</li>
<li><p>Gemini analyzes the logs and returns structured debugging insights.</p>
</li>
</ol>
<h3>Payload Sent to Lambda</h3>
<p>The Lambda function expects a JSON payload in the following format:</p>
<pre><code class="language-python">{
    "stage": "$stage",        # Name of the stage where the pipeline failed
    "job": "$job",            # Job name (e.g., cicd-copilot)
    "build_id": "$build_id",  # Build ID number (e.g., 1, 2, 3)
    "logs": "$logs"           # Last 200 lines of failure logs
}
</code></pre>
<p>This structured input allows the AI agent to understand the pipeline context before analyzing the logs.</p>
<h3>Prompt Sent to Gemini API</h3>
<p>Inside the Lambda function, we make a POST request to the Gemini API with the following structured prompt:</p>
<pre><code class="language-python">You are a senior CI/CD Copilot specialized in Jenkins pipelines.

Pipeline context:
- Stage name: {stage}
- Expected outcome: Build an artifact usable by later stages

Your tasks:
1. Identify the failure category (build / runtime / config / infra / dependency / auth / unknown)
2. Identify the most likely root cause
3. Provide actionable fixes
4. Suggest a patch ONLY if clearly inferable

Respond ONLY in valid JSON with this schema:
{{
  "failure_category": "",
  "root_cause": "",
  "actionable_fixes": [],
  "suggested_patch": {{
    "file": "",
    "line": "",
    "fix": ""
  }}
}}

Logs:
{logs}
</code></pre>
<p>The prompt dynamically injects two key variables:</p>
<ul>
<li><p><code>{stage}</code>: the pipeline stage name</p>
</li>
<li><p><code>{logs}</code>: the failure logs</p>
</li>
</ul>
<p>If you'd like to explore the full Lambda implementation, you can view it here:<br />👉 <a href="https://github.com/Pravesh-Sudha/ai-devops-agent/blob/main/cicd-copilot/terra-config/lambda_function.py">https://github.com/Pravesh-Sudha/ai-devops-agent/blob/main/cicd-copilot/terra-config/lambda_function.py</a></p>
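<p>As a rough sketch of how the request body is assembled (the template below is condensed from the prompt above, and the <code>generateContent</code> body shape follows the public Gemini REST format; the linked file is the authoritative implementation):</p>
<pre><code class="language-python">import json

# Condensed version of the prompt shown above; see lambda_function.py
# for the full text, including the JSON response schema.
PROMPT_TEMPLATE = (
    "You are a senior CI/CD Copilot specialized in Jenkins pipelines.\n"
    "Pipeline context:\n"
    "- Stage name: {stage}\n"
    "Respond ONLY in valid JSON.\n"
    "Logs:\n{logs}"
)

def build_gemini_body(stage, logs):
    """Build the JSON body for a Gemini generateContent request."""
    prompt = PROMPT_TEMPLATE.format(stage=stage, logs=logs)
    # Gemini's generateContent endpoint expects contents -> parts -> text
    return {"contents": [{"parts": [{"text": prompt}]}]}

body = build_gemini_body("Build Image", "docker: Error response from daemon")
print(json.dumps(body)[:40])
</code></pre>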
<h3>How It Integrates with Jenkins</h3>
<p>You might be wondering: how exactly does this connect with Jenkins?</p>
<p>Inside the <code>Jenkinsfile</code>, each stage:</p>
<ul>
<li><p>Sets an environment variable for the stage name.</p>
</li>
<li><p>Redirects command output (in case of failure) into a <code>LOG_FILE</code>.</p>
</li>
</ul>
<p>If any stage fails:</p>
<ul>
<li><p>The <code>post { failure { ... } }</code> block is triggered.</p>
</li>
<li><p>Jenkins constructs the JSON payload.</p>
</li>
<li><p>It invokes the AWS Lambda function.</p>
</li>
<li><p>The AI-generated failure analysis is printed directly into the Jenkins console output.</p>
</li>
</ul>
<p>This gives you instant, structured debugging assistance right inside your CI/CD pipeline.</p>
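<p>In sketch form, that failure hook looks something like this (illustrative only: the Lambda function name and the <code>FAILED_STAGE</code> variable here are placeholders, and the repository's Jenkinsfile is the source of truth):</p>
<pre><code class="language-groovy">post {
  failure {
    script {
      // Collect the last 200 lines of logs captured by the failing stage
      def logs = sh(script: 'tail -n 200 "$LOG_FILE"', returnStdout: true)
      def payload = groovy.json.JsonOutput.toJson([
        stage: env.FAILED_STAGE ?: 'unknown',  // set by each stage
        job: env.JOB_NAME,
        build_id: env.BUILD_ID,
        logs: logs
      ])
      writeFile file: 'payload.json', text: payload
      // Invoke the Lambda and print the AI analysis in the console output
      sh '''
        aws lambda invoke --function-name cicd-copilot \
          --cli-binary-format raw-in-base64-out \
          --payload file://payload.json response.json
        cat response.json
      '''
    }
  }
}
</code></pre>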
<h3>How to Integrate This in Your Own Workspace</h3>
<p>To replicate this approach in your own pipeline:</p>
<ol>
<li><p>Append log redirection to each command:</p>
<pre><code class="language-bash">&gt;&gt; ${LOG_FILE} 2&gt;&amp;1
</code></pre>
</li>
<li><p>Define an environment variable for the stage name.</p>
</li>
<li><p>Provision:</p>
<ul>
<li><p>AWS Lambda</p>
</li>
<li><p>IAM roles</p>
</li>
<li><p>Secrets Manager (for the Gemini API key)<br />using Terraform.</p>
</li>
</ul>
</li>
<li><p>Add a <code>post { failure }</code> block in your Jenkinsfile to invoke the Lambda function with the structured JSON payload.</p>
</li>
</ol>
<p>Once configured, your CI/CD pipeline becomes AI-assisted, capable of analyzing its own failures and suggesting actionable fixes.</p>
<hr />
<h2>💡 Practical Demonstration</h2>
<p>Enough with the theory: let's see this in action.</p>
<h3>Step 1: Fork and Clone the Repository</h3>
<p>First, head over to the GitHub repository and <strong>fork it under your own username</strong>.<br />You'll be intentionally modifying the code later to trigger pipeline failures, so forking is important.</p>
<p>After forking:</p>
<pre><code class="language-bash">git clone https://github.com/your-username/ai-devops-agent.git
cd ai-devops-agent/cicd-copilot/terra-config
</code></pre>
<h3>Step 2: Initialize Terraform</h3>
<p>Inside the <code>terra-config</code> directory, initialize Terraform:</p>
<pre><code class="language-bash">terraform init
</code></pre>
<h3>Step 3: Generate Your Gemini API Key</h3>
<p>To provision the infrastructure, you'll need a <strong>GEMINI_API_KEY</strong>.</p>
<ol>
<li><p>Go to <strong>Google AI Studio</strong></p>
</li>
<li><p>Log in with your Google account</p>
</li>
<li><p>Navigate to the <strong>API</strong> section</p>
</li>
<li><p>Click <strong>Create API Key</strong></p>
</li>
<li><p>Give it a name and generate the key</p>
</li>
<li><p>Store it securely</p>
</li>
</ol>
<p>Now, apply the Terraform configuration:</p>
<pre><code class="language-bash">terraform apply -var="gemini_api_key=&lt;Paste-your-key-here&gt;" --auto-approve
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/f4495eaf-3143-461c-9b4e-3df050704e75.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Make sure the configured AWS IAM user has the required permissions (Lambda and Secrets Manager access), as mentioned in the prerequisites section.</p>
</blockquote>
<p>Once completed, your infrastructure (Lambda function + IAM roles + Secret) will be up and running.</p>
<h3>Step 4: Configure Jenkins Pipeline</h3>
<p>Open your Jenkins dashboard (usually running on <code>http://localhost:8080</code>).</p>
<ol>
<li><p>Click <strong>Create New Item</strong></p>
</li>
<li><p>Select <strong>Pipeline</strong></p>
</li>
<li><p>Name it: <code>cicd-copilot</code></p>
</li>
<li><p>Choose <strong>Pipeline script from SCM</strong></p>
</li>
</ol>
<p>Configure the following:</p>
<ul>
<li><p><strong>SCM:</strong> Git</p>
</li>
<li><p><strong>Repository URL:</strong><br /><code>https://github.com/your-username/ai-devops-agent</code></p>
</li>
<li><p><strong>Branch Specifier:</strong> <code>main</code></p>
</li>
<li><p><strong>Script Path:</strong><br /><code>cicd-copilot/Jenkinsfile</code></p>
</li>
</ul>
<p>Click <strong>Save</strong>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/83e359f9-b9b4-4df4-9245-f7f883b5c4d2.png" alt="" style="display:block;margin:0 auto" />

<h3>Step 5: Install Required Jenkins Plugins</h3>
<p>Navigate to:</p>
<p><strong>Manage Jenkins → Plugins</strong></p>
<p>Install the following plugins:</p>
<ul>
<li><p>Docker</p>
</li>
<li><p>Docker Pipeline</p>
</li>
<li><p>Docker Commons</p>
</li>
</ul>
<h3>Step 6: Add Docker to Jenkins PATH</h3>
<p>Ensure Docker is accessible inside Jenkins.</p>
<p>In your terminal, run:</p>
<pre><code class="language-bash">which docker
</code></pre>
<p>Copy the output path.</p>
<p>Now go to:</p>
<p><strong>Manage Jenkins → System → Global Properties</strong></p>
<p>Append the copied path to the existing PATH variable, using <code>:</code> as a separator. Save the configuration.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/34f11cf9-d41e-4296-9488-543f43358466.png" alt="" style="display:block;margin:0 auto" />

<h3>Step 7: Add Docker Hub Credentials</h3>
<p>Navigate to:</p>
<p><strong>Manage Jenkins → Credentials</strong></p>
<ol>
<li><p>Add a new credential:</p>
<ul>
<li><p>Kind: <strong>Username with password</strong></p>
</li>
<li><p>Username: Your Docker Hub username</p>
</li>
<li><p>Password: Your Docker Hub password</p>
</li>
<li><p>ID: <code>docker-cred</code></p>
</li>
</ul>
</li>
</ol>
<p>Save it.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/b44801c5-a867-40c3-9dfc-f7fa4fc7a0f3.png" alt="" style="display:block;margin:0 auto" />

<h3>Step 8: Trigger the Pipeline</h3>
<p>Now go back to your <code>cicd-copilot</code> project and click <strong>Build Now</strong>.</p>
<p>Open <strong>Console Output</strong>.</p>
<p>You will notice that the pipeline fails; this is intentional.</p>
<p>The logs are automatically captured and sent to the AI Agent, which returns structured debugging analysis inside the Jenkins console.</p>
<p>In the first failure, the AI identifies a typo in the <code>Dockerfile</code>.<br />For example:</p>
<pre><code class="language-plaintext">apine
</code></pre>
<p>It should be:</p>
<pre><code class="language-plaintext">alpine
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/3213486c-34f1-43c2-bea6-881bc5e3c8d1.png" alt="" style="display:block;margin:0 auto" />

<p>Fix the typo in your forked repository and commit the changes.</p>
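<p>The repository's Dockerfile isn't reproduced here, but assuming a Node base image (the app later serves on port 3000), the corrected base-image line would look something like this; treat the exact tag as illustrative:</p>
<pre><code class="language-plaintext">FROM node:18-alpine
</code></pre>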
<h3>Step 9: Second Failure (Version Mismatch)</h3>
<p>Rebuild the pipeline.</p>
<p>This time, the pipeline fails again, but for a different reason: there is a Docker image version mismatch.</p>
<p>The AI analysis might suggest that the image is private or unavailable. However, the real issue is in the <code>Jenkinsfile</code>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/2dab0fcc-d940-47bc-82a7-4e935c513880.png" alt="" style="display:block;margin:0 auto" />

<p>Inside the <strong>Run Container</strong> stage, change the image version from:</p>
<pre><code class="language-plaintext">v2
</code></pre>
<p>to:</p>
<pre><code class="language-plaintext">v1
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/a23055c8-f0d0-4171-8e1b-e60e46277646.png" alt="" style="display:block;margin:0 auto" />

<p>Commit the change and rebuild the pipeline.</p>
<h3>Step 10: Successful Pipeline Run</h3>
<p>Now, when you trigger the pipeline again:</p>
<ul>
<li><p>The build succeeds</p>
</li>
<li><p>The Docker image is pushed to your Docker Hub account</p>
</li>
<li><p>The container starts successfully</p>
</li>
</ul>
<p>Visit:</p>
<pre><code class="language-plaintext">http://localhost:3000
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/32f1dc9e-8011-42fc-8668-2ef0f42e29a6.png" alt="" style="display:block;margin:0 auto" />

<p>You should see the Book Reader application running.</p>
<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/3e3fb27b-3fa0-441c-a89c-0b485542e1e9.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/64670bca67317a0d1d8a20ce/37cb543f-2701-42e3-96ce-c2448164f62e.png" alt="" style="display:block;margin:0 auto" />

<h3>Stop the Application</h3>
<p>To stop the running container:</p>
<pre><code class="language-bash">docker kill cicd-copilot
</code></pre>
<h3>Clean Up Infrastructure</h3>
<p>To avoid unnecessary AWS charges, destroy the infrastructure:</p>
<pre><code class="language-bash">terraform destroy -var="gemini_api_key=&lt;Paste-your-key-here&gt;" --auto-approve
</code></pre>
<h3>What We Achieved</h3>
<p>In this project, we built an AI-powered CI/CD Copilot using:</p>
<ul>
<li><p>Jenkins for pipeline orchestration</p>
</li>
<li><p>AWS Lambda for AI agent logic</p>
</li>
<li><p>AWS Secrets Manager for secure API storage</p>
</li>
<li><p>Google Gemini API for log analysis</p>
</li>
</ul>
<p>The agent receives contextual pipeline information and failure logs, analyzes them intelligently, and provides structured debugging insights directly inside the CI/CD workflow.</p>
<p>Instead of manually scanning logs, you now have an AI assistant that understands context, categorizes failures, identifies root causes, and suggests actionable fixes, making debugging faster, smarter, and more efficient.</p>
<hr />
<h2>💡 Conclusion</h2>
<p>Modern CI/CD pipelines are powerful, but when they fail, debugging can quickly become time-consuming and frustrating. In this project, we went a step further by integrating AI directly into the pipeline workflow.</p>
<p>By combining:</p>
<ul>
<li><p><strong>Jenkins</strong> for orchestration</p>
</li>
<li><p><strong>AWS Lambda</strong> for serverless execution</p>
</li>
<li><p><strong>AWS Secrets Manager</strong> for secure API handling</p>
</li>
<li><p><strong>Google Gemini API</strong> for intelligent log analysis</p>
</li>
</ul>
<p>we built an AI-powered CI/CD Copilot capable of understanding pipeline context, analyzing failure logs, identifying root causes, and suggesting actionable fixes, all automatically.</p>
<p>This isn't just about log analysis. It's about shifting from reactive debugging to intelligent, context-aware automation.</p>
<p>As AI continues to evolve, integrating agentic systems into DevOps workflows will become increasingly common. Building projects like this not only strengthens your cloud and automation skills but also prepares you for the next wave of AI-driven infrastructure.</p>
<p>If you found this project helpful, feel free to connect with me and follow my work:</p>
<ul>
<li><p>🌐<strong>Website:</strong><a href="https://praveshsudha.com">https://praveshsudha.com</a></p>
</li>
<li><p>📝<strong>Blog:</strong><a href="https://blog.praveshsudha.com">https://blog.praveshsudha.com</a></p>
</li>
<li><p>💼<strong>LinkedIn:</strong><a href="https://www.linkedin.com/in/pravesh-sudha">https://www.linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>🐙<strong>GitHub:</strong><a href="https://github.com/Pravesh-Sudha">https://github.com/Pravesh-Sudha</a></p>
</li>
<li><p>🐦<strong>Twitter/X:</strong><a href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p><strong>🎥 Youtube</strong>: <a href="https://youtube.com/@pravesh-sudha">https://youtube.com/@pravesh-sudha</a></p>
</li>
</ul>
<p>I regularly share content on DevOps, AWS, Terraform, CI/CD, and building real-world cloud projects from scratch.</p>
<p>If you build your own version of this AI CI/CD Copilot, tag me; I'd love to see what you create.</p>
<p>Happy Building 🚀</p>
]]></description><link>https://blog.praveshsudha.com/building-an-ai-powered-ci-cd-copilot-with-jenkins-and-aws-lambda</link><guid isPermaLink="true">https://blog.praveshsudha.com/building-an-ai-powered-ci-cd-copilot-with-jenkins-and-aws-lambda</guid><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[gemini]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 I Built SkillDebt.ai to Understand My Own Skill Gaps]]></title><description><![CDATA[<h2>🌟 Introduction</h2>
<p>Hola amigos 👋<br />Welcome to the world of <strong>AI and DevOps</strong>.</p>
<p>In this blog, I want to share my experience building <a href="https://my-repo-8k7lhiaxb-pravesh-sudhas-projects.vercel.app/"><strong>SkillDebt.ai</strong></a> as part of the <strong>UI Strikes Back Challenge</strong>, hosted by the <strong>WEMakeDevs community</strong> in collaboration with <strong>Tambo AI</strong>.</p>
<p>The idea behind SkillDebt.ai is simple:<br />as developers, we often talk about <em>technical debt</em> in our code, but we rarely think about the <strong>technical debt in our careers</strong>.</p>
<p>SkillDebt.ai takes your <strong>resume or tech stack</strong>, analyzes it using <a href="https://tambo.co/"><strong>Tambo</strong></a> <strong>AI and Gemini</strong>, and turns that data into <strong>beautiful, interactive visual insights</strong> about your skills. Instead of long paragraphs or generic advice, you get a clear picture of where you stand in your field.</p>
<p>Beyond skill visualization, it also highlights:</p>
<ul>
<li><p><strong>Skill decay</strong>: tools and technologies you haven't touched in a while</p>
</li>
<li><p><strong>Risk audits</strong>: warning signs when core skills are becoming outdated</p>
</li>
<li><p><strong>Upgrade suggestions</strong>: practical recommendations on what skills to add next to boost your career growth</p>
</li>
</ul>
<p>This project isn't just about AI or UI; it's about giving developers clarity, direction, and a better way to plan their learning journey.</p>
<p><a href="https://youtu.be/ItTKixXJF2I">https://youtu.be/ItTKixXJF2I</a></p>
<hr />
<h2>🌟 Practical Demo</h2>
<p>Lets see <strong>SkillDebt.ai</strong> in action.</p>
<p>There's no heavy setup or complex prerequisites. All you need is your <strong>resume in PDF format</strong>.</p>
<p>Head over to the live demo here:<br />👉 <a href="https://my-repo-8k7lhiaxb-pravesh-sudhas-projects.vercel.app/">https://my-repo-8k7lhiaxb-pravesh-sudhas-projects.vercel.app/</a></p>
<blockquote>
<p>The project was taken down on 25 Feb 2026; you can follow the GitHub guide to run it on your own local system</p>
</blockquote>
<p>Once you're on the site, click on <strong>Upload Resume</strong>, select your PDF, and hit <strong>Analyze</strong>. That's it.</p>
<p>From there, SkillDebt.ai walks you through a complete breakdown of your profile:</p>
<p>First, you'll see a <strong>visual skill analysis chart</strong> that gives a quick overview of your strengths and gaps across different areas in your field.</p>
<p>Next comes the <strong>Skill Decay graph</strong>, which highlights technologies you haven't actively used in a while and flags them based on risk. This part is especially useful because it surfaces skills you might be unknowingly neglecting.</p>
<p>After that, the <strong>Risk Audit</strong> section kicks in. It acts like a warning system, pointing out areas in your resume that could become problematic if left unaddressed.</p>
<p>Finally, you get <strong>career-focused upgrade suggestions</strong>: specific skills you should consider adding or improving to stay relevant and boost long-term growth.</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770317943654/421c586e-f6d4-4361-aebe-c5f2cbea105c.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770317953351/807e7dba-e27d-4067-a683-6a500b631862.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770317957343/d8ad7eac-de79-4a74-8ab6-551c5699be3c.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770317969807/96285c8d-0f31-477f-a448-9094931f5514.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770317974042/a0f61171-788e-4991-a052-04b2ecc34630.png" alt="" style="display:block;margin:0 auto" />

<p>If you're curious about how everything works under the hood, the complete source code is open-source and available here:<br />🔗 <a href="https://github.com/Pravesh-Sudha/ui-strikes-back">https://github.com/Pravesh-Sudha/ui-strikes-back</a></p>
<hr />
<h2>🌟 How I Built It</h2>
<p>Going into the hackathon, I had one clear goal:<br />I didn't want to build something flashy but forgettable. I wanted to build something <strong>novel and actually useful</strong>.</p>
<p>The AI agent space is already crowded. Everywhere you look, there's another code debugger, another productivity hack, another AI assistant doing roughly the same thing. At the same time, with the rapid rise of AI, especially tools like <strong>Claude Code and autonomous agents</strong>, AI engineering has gone through the roof.</p>
<p>That's when I paused and thought:<br />instead of building yet another tool to <em>replace</em> engineers, why not build something that helps engineers <strong>upskill</strong> and stay ahead of the curve?</p>
<p>That idea became <a href="https://my-repo-8k7lhiaxb-pravesh-sudhas-projects.vercel.app/"><strong>SkillDebt.ai</strong></a>, a system focused on helping developers understand where they stand today, what they're falling behind on, and how they can adapt to this AI-driven future instead of getting left behind.</p>
<p>From an implementation perspective, the most challenging part for me was configuring the <strong>Tambo Generative UI components</strong>. Getting the components to behave correctly, respond to the data, and render meaningful insights wasn't straightforward at first. I ran into plenty of invalid input errors along the way.</p>
<p>But once I understood how the pieces fit together, things started clicking. The <strong>documentation played a huge role</strong> here; it turned what initially felt overwhelming into a structured learning process. After a lot of trial and error (mostly <strong>Invalid config for components</strong> errors), I finally got all <strong>four core components</strong> working together smoothly.</p>
<p>The main heart of the project is the <code>tambo.config.ts</code> file inside the <code>src/tambo</code> directory; it handles the prompts for the generative UI components. Have a look at it:</p>
<pre><code class="language-typescript">import { z } from 'zod';
import { SkillRadarChart } from '../components/adaptive/SkillRadarChart';
import { SkillDecayTimeline } from '../components/adaptive/SkillDecayTimeline';
import { RiskWarningCard } from '../components/adaptive/RiskWarningCard';
import { UpgradeSuggestionCard } from '../components/adaptive/UpgradeSuggestionCard';
import { ExplanationToggle } from '../components/adaptive/ExplanationToggle';

export const tamboConfig = {
    components: [
        {
            name: 'skill_radar_chart',
            description: 'Visualizes the balance between depth and breadth of skills, or compares multiple skill categories.',
            component: SkillRadarChart,
            propsSchema: z.object({
                title: z.string().describe("Title of the chart, e.g., 'Frontend Skill Balance'").default("Skill Analysis"),
                data: z.array(z.object({
                    skill: z.string().describe("Name of the skill, e.g., 'React'").default("Unknown Skill"),
                    value: z.number().min(0).max(100).describe("Skill level from 0 to 100").default(50),
                    fullMark: z.number().default(100).optional(),
                })).describe("Array of 3-6 skills to visualize.").default([]),
            }),
        },
        {
            name: 'skill_decay_timeline',
            description: 'Shows a timeline of skills and their freshness/decay status based on last usage.',
            component: SkillDecayTimeline,
            propsSchema: z.object({
                data: z.array(z.object({
                    name: z.string().describe("Name of the skill, e.g. 'jQuery'").default("Unknown Skill"),
                    lastUsed: z.string().describe("Year or timeframe like '2023', 'Current'").default("Unknown"),
                    decayLevel: z.string().describe("Decay level: 'low', 'medium', 'high', 'critical'").default("medium"),
                })).describe("List of data points regarding skill usage and decay for the timeline.").default([]),
            }),
        },
        {
            name: 'risk_warning_card',
            description: 'Displays a warning about a specific career risk or skill obsolescence.',
            component: RiskWarningCard,
            propsSchema: z.object({
                title: z.string().describe("Short warning title, e.g. 'Legacy Stack Risk'").default("Risk Warning"),
                message: z.string().describe("Detailed explanation of the risk").default("Potential risk detected."),
                riskLevel: z.string().describe("Risk level: 'moderate', 'high', 'critical'").default('moderate'),
            }),
        },
        {
            name: 'upgrade_suggestion_card',
            description: 'Suggests a specific skill upgrade or learning path with potential impact.',
            component: UpgradeSuggestionCard,
            propsSchema: z.object({
                skill: z.string().describe("The recommended skill to learn").default("New Skill"),
                recommendation: z.string().describe("Why this skill is recommended").default("Recommended for career growth."),
                impact: z.string().describe("Impact: 'career_pivot', 'salary_bump', 'stability'").default("stability"),
            }),
        },
        {
            name: 'explanation_toggle',
            description: 'Can be used to provide deeper context or reasoning for a specific insight, hidden by default behind a toggle.',
            component: ExplanationToggle,
            propsSchema: z.object({
                reasoning: z.string().describe("The detailed reasoning or explanation to be hidden.").default("No additional details provided."),
                context: z.string().describe("Optional context or source data reference.").optional(),
            }),
        },
    ],
};
</code></pre>
<p>It wasn't easy at the start, but that struggle is exactly what made the project so rewarding.</p>
<p>After deploying the project, I posted about it on <a href="https://www.linkedin.com/posts/pravesh-sudha_ai-aiagents-theuistrikesback-activity-7425517818843971584-MTpj?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAADlc2qIBCVMfVhYQW8Nw26AxcZeteDQrXRg"><strong>LinkedIn</strong></a>, and dozens of developers got their profiles reviewed by the system. Seeing real people interact with generative UI components built with <strong>Tambo</strong> made my <strong>DAY</strong>!</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770455428855/ff54a565-1626-4d1c-931b-33483919db92.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🌟 Conclusion</h2>
<p>Building <strong>SkillDebt.ai</strong> was a genuinely fun and exciting journey. From shaping the idea, struggling through early implementation issues, to finally seeing the generative UI come together, every step pushed me to think differently about how AI can be used to <strong>empower developers</strong>, not replace them.</p>
<p>Huge thanks to the <strong>WEMakeDevs community</strong> and <strong>Tambo AI</strong> for organizing the <strong>UI Strikes Back Challenge</strong> and creating a space that encourages experimentation, learning, and building in public. Challenges like these are what make the developer ecosystem so motivating.</p>
<p>If you found this project interesting or have ideas on how it can be improved, I'd love to hear from you. You can find the code on GitHub, and feel free to connect with me on my socials:</p>
<ul>
<li><p><strong>GitHub:</strong> <a href="https://github.com/Pravesh-Sudha">https://github.com/Pravesh-Sudha</a></p>
</li>
<li><p><strong>LinkedIn:</strong> <a href="https://www.linkedin.com/in/pravesh-sudha/">https://www.linkedin.com/in/pravesh-sudha/</a></p>
</li>
<li><p><strong>Twitter / X:</strong> <a href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p><strong>YouTube:</strong> <a href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a></p>
</li>
</ul>
<p>Thanks for reading  and as always, keep building, keep learning, and stay curious 🚀</p>
<p>Adios 👋</p>
]]></description><link>https://blog.praveshsudha.com/i-built-skilldebtai-to-understand-my-own-skill-gaps</link><guid isPermaLink="true">https://blog.praveshsudha.com/i-built-skilldebtai-to-understand-my-own-skill-gaps</guid><category><![CDATA[tamboi]]></category><category><![CDATA[webdev]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AI]]></category><category><![CDATA[skills]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[How I Built an AI Terraform Review Agent on Serverless AWS]]></title><description><![CDATA[<h2 id="heading-introduction">🌟 Introduction</h2>
<p>Welcome, Devs 👋<br />Today, we're stepping into the exciting intersection of <strong>AI, automation, and cloud infrastructure</strong>.</p>
<p>In this project, we'll explore how an <strong>AI-powered agent can actively participate in a real DevOps workflow</strong>, just like a senior reviewer on your team. This isn't a toy demo; it closely resembles how <strong>real-world infrastructure changes are reviewed, validated, and approved</strong> in production environments.</p>
<p>We'll use <strong>Terraform</strong> to provision cloud resources and <strong>GitHub Actions</strong> to automatically validate every pull request that modifies our HCL code. But here's the twist 👀<br />Instead of relying only on static checks, we introduce an <strong>AI agent</strong> into the pipeline.</p>
<p>Every infrastructure change is:</p>
<ul>
<li><p>Scanned using <strong>Terrascan</strong></p>
</li>
<li><p>Reviewed by an <strong>AI agent powered by Gemini</strong></p>
</li>
<li><p>Automatically <strong>approved, approved with changes, or rejected</strong> based on risk severity</p>
</li>
</ul>
<p>If a pull request introduces <strong>dangerous or insecure infrastructure changes</strong>, the AI agent <strong>blocks the PR</strong>, just like an automated infrastructure security reviewer.</p>
<p>Think of it as:</p>
<blockquote>
<p>🧠 An AI-powered Infra Guardian that never gets tired of reviewing Terraform code.</p>
</blockquote>
<p>So without further ado, let's dive in and see how we built an <strong>AI-driven, serverless DevOps workflow</strong> that brings intelligence directly into your CI/CD pipeline.</p>
<hr />
<h2 id="heading-youtube-demonstration">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/i2XkTZQoS2g">https://youtu.be/i2XkTZQoS2g</a></div>
<hr />
<h2 id="heading-pre-requisites">🌟 Pre-requisites</h2>
<p>Before we dive deep into the implementation, let's make sure your environment is ready. This project touches multiple tools across cloud, IaC, security, and CI/CD, so having these set up beforehand will save you a lot of time.</p>
<p>Make sure you have the following in place:</p>
<ul>
<li><p><strong>AWS CLI</strong> installed and configured with an IAM user</p>
<blockquote>
<p>The IAM user should have permissions to create resources like ALB, ECS, Lambda, IAM, ACM, etc.</p>
</blockquote>
</li>
<li><p><strong>Terraform CLI</strong> installed on your system</p>
</li>
<li><p><strong>GitHub account</strong> (pretty easy 😉)</p>
</li>
<li><p><strong>Terrascan</strong> installed locally<br />  👉 Follow the official guide here:<br />  <a target="_blank" href="https://runterrascan.io/docs/getting-started/">https://runterrascan.io/docs/getting-started/</a></p>
</li>
</ul>
<p>If you're completely new to <strong>AWS CLI</strong> or <strong>Terraform</strong>, don't worry. I've already written a beginner-friendly guide that walks you through everything step by step:</p>
<p>📘 <strong>Getting Started with Terraform (Beginner's Guide)</strong><br /><a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli">https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli</a></p>
<p>Once these prerequisites are fulfilled, you're all set 🚀</p>
<hr />
<h2 id="heading-why-ai-agents-in-modern-devops">🌟 Why AI Agents in Modern DevOps?</h2>
<p>The current DevOps landscape is heavily influenced by <strong>AI-driven automation</strong>. What we now call <strong>AIOps</strong> has quietly become the de-facto standard for deploying, monitoring, and delivering software at scale.</p>
<p>AI agents are everywhere today, but let's address the elephant in the room.</p>
<p>An <strong>AI agent</strong> is essentially a program that automates work which previously required human intervention. In many cases, it still follows a <strong>human-in-the-loop</strong> approach, but the heavy lifting (analysis, validation, and decision-making) is handled by the agent itself.</p>
<p>In this project, we'll bring that concept to life.</p>
<p>We'll deploy a <strong>Super Mario Bros game</strong> (containerized using Docker) on a <strong>serverless AWS architecture</strong>, leveraging services like:</p>
<ul>
<li><p><strong>Amazon ECS</strong></p>
</li>
<li><p><strong>AWS Lambda</strong></p>
</li>
<li><p><strong>Application Load Balancer (ALB)</strong></p>
</li>
<li><p><strong>ACM for HTTPS</strong></p>
</li>
<li><p><strong>GitHub Actions for CI/CD</strong></p>
</li>
</ul>
<p>This setup closely resembles a <strong>real-world production environment</strong>.</p>
<p>Now comes the interesting part 👀</p>
<p>Every time a <strong>Pull Request</strong> is raised against our Terraform codebase:</p>
<ul>
<li><p><strong>GitHub Actions</strong> kicks in</p>
</li>
<li><p><strong>Terrascan</strong> scans our IaC for security and best-practice violations</p>
</li>
<li><p>The scan report is sent to an <strong>AI agent powered by Gemini</strong></p>
</li>
<li><p>The AI analyzes the findings and decides whether to:</p>
<ul>
<li><p><strong>Approve</strong></p>
</li>
<li><p><strong>Approve with Changes</strong></p>
</li>
<li><p><strong>Reject</strong> the PR</p>
</li>
</ul>
</li>
</ul>
<p>In a real-world DevOps workflow, this kind of system can <strong>save hours of manual review</strong>, reduce human error, and provide <strong>actionable remediation suggestions</strong> along with architectural risk insights.</p>
<p>Think of it as an <strong>automated Infrastructure Reviewer</strong>: one that never gets tired and scales with your team.</p>
<hr />
<h2 id="heading-practical-demonstration-building-the-ai-powered-devops-workflow">🌟 Practical Demonstration: Building the AI-Powered DevOps Workflow</h2>
<p>Enough theory; let's get our hands dirty and see this system in action.</p>
<p>To get started, head over to the following GitHub repository, <strong>fork it under your own GitHub username</strong>, and then clone it locally:</p>
<p>👉 <strong>Repository:</strong> <a target="_blank" href="https://github.com/Pravesh-Sudha/ai-devops-agent">https://github.com/Pravesh-Sudha/ai-devops-agent</a></p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/&lt;your-username&gt;/ai-devops-agent.git
<span class="hljs-built_in">cd</span> ai-devops-agent
</code></pre>
<p>Now navigate into the main project directory:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terraform-review-agent
</code></pre>
<p>Open the project in <strong>VS Code</strong> (or your favorite editor). You'll notice two main subdirectories:</p>
<pre><code class="lang-bash">terraform-review-agent/
├── lambda/
└── terraform/
</code></pre>
<ul>
<li><p><code>lambda/</code>: Contains the AI review Lambda function</p>
</li>
<li><p><code>terraform/</code>: Contains all infrastructure provisioning code</p>
</li>
</ul>
<p>Let's walk through the Terraform configuration piece by piece.</p>
<h2 id="heading-terraform-code-breakdown">🧩 Terraform Code Breakdown</h2>
<h3 id="heading-providertf">🔹 <code>provider.tf</code></h3>
<p>Defines AWS as the cloud provider:</p>
<ul>
<li><p>AWS provider version: <strong>6.26.0</strong></p>
</li>
<li><p>Region: <strong>us-east-1</strong></p>
</li>
</ul>
<p>This ensures consistent provider behavior across environments.</p>
<h3 id="heading-backendtf">🔹 <code>backend.tf</code></h3>
<p>We store Terraform state remotely using <strong>Amazon S3</strong>, a production best practice.</p>
<pre><code class="lang-bash">use_lockfile = <span class="hljs-literal">true</span>
</code></pre>
<p>This enables <strong>state locking without DynamoDB</strong>, preventing concurrent state corruption using Terraform's native lockfile mechanism.</p>
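<p>Put together, the backend block looks roughly like this (the <code>key</code> path is illustrative, and <code>use_lockfile</code> requires Terraform 1.10 or newer):</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket       = "pravesh-terraform-mario-state"  # must be globally unique
    key          = "mario-game/terraform.tfstate"   # illustrative state path
    region       = "us-east-1"
    use_lockfile = true  # S3-native locking, no DynamoDB table required
  }
}
</code></pre>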
<h3 id="heading-variablestf">🔹 <code>variables.tf</code></h3>
<p>Only two variables are required:</p>
<ul>
<li><p><code>project_name</code>: fixed as <strong>mario-game</strong></p>
</li>
<li><p><code>gemini_api_key</code>: passed dynamically (never hardcoded)</p>
</li>
</ul>
<p>This ensures our API key remains secure and out of version control.</p>
<h3 id="heading-outputstf">🔹 <code>outputs.tf</code></h3>
<p>Provides useful runtime information after provisioning:</p>
<ul>
<li><p>ALB DNS name (where the game runs)</p>
</li>
<li><p>ACM certificate ARN (used later for HTTPS)</p>
</li>
</ul>
<h3 id="heading-networkingtf">🔹 <code>networking.tf</code></h3>
<p>Instead of using the default VPC, we create our <strong>own VPC</strong> using the official AWS VPC module:</p>
<ul>
<li><p>Two <strong>public subnets</strong></p>
</li>
<li><p>Clean network isolation</p>
</li>
<li><p>Better control and scalability</p>
</li>
</ul>
<h3 id="heading-securitytf">🔹 <code>security.tf</code></h3>
<p>Security is handled via two separate security groups:</p>
<ul>
<li><p><strong>ALB Security Group</strong></p>
<ul>
<li>Allows inbound traffic from anywhere (port 80 initially)</li>
</ul>
</li>
<li><p><strong>ECS Task Security Group</strong></p>
<ul>
<li>Only allows traffic from the ALB</li>
</ul>
</li>
</ul>
<p>This follows the <strong>least privilege principle</strong>.<br />(We later extend this to support HTTPS on port 443.)</p>
<h3 id="heading-secretstf">🔹 <code>secrets.tf</code></h3>
<p>The Gemini API key is securely stored using <strong>AWS Secrets Manager</strong>.</p>
<p>No plaintext secrets. No leaks. Production-safe by default.</p>
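<p>A minimal sketch of what this looks like in HCL (resource and secret names here are illustrative, not necessarily those used in the repo):</p>
<pre><code class="lang-bash">resource "aws_secretsmanager_secret" "gemini" {
  name = "mario-game/gemini-api-key"  # illustrative secret name
}

resource "aws_secretsmanager_secret_version" "gemini" {
  secret_id     = aws_secretsmanager_secret.gemini.id
  secret_string = var.gemini_api_key  # injected via -var at apply time, never committed
}
</code></pre>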
<h2 id="heading-the-ai-brain-lambda-function">🧠 The AI Brain: Lambda Function</h2>
<h3 id="heading-lambdatf">🔹 <code>lambda.tf</code></h3>
<p>This file defines a Python-based <strong>AWS Lambda function</strong> responsible for reviewing Terrascan findings and acting as a <strong>CI/CD security gate</strong>.</p>
<p>At the heart of this Lambda is a carefully crafted prompt:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">build_prompt</span>(<span class="hljs-params">findings: dict</span>) -&gt; str:</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">f"""
You are a senior DevOps and Terraform security reviewer acting as a CI/CD security gate.

Your task is to analyze Terrascan findings and decide whether the infrastructure
can be deployed based on **risk thresholds**, not perfection.

Decision Policy (STRICT)
- REJECT if:
  - Any HIGH or CRITICAL severity issue exists
  - OR MEDIUM severity issues &gt;= 4
  - OR Application Load Balancer has **no HTTPS listener at all**
- APPROVE_WITH_CHANGES if:
  - MEDIUM severity issues are 1-3
- APPROVE if:
  - Only LOW or INFO issues exist

Output Format
Provide:
1. 🚨 Security issues ordered by severity (summary only)
2. 🛠 Required remediation (only actionable items)
3. Risk justification (1-2 lines)
4. 📌 Final verdict: APPROVE | APPROVE_WITH_CHANGES | REJECT

Rules:
- Be concise
- Use bullet points
- Focus on AWS (ALB, ECS, VPC, IAM)
- Ignore Terrascan scan_errors
- Do NOT repeat raw JSON
- Verdict must strictly follow the Decision Policy

Findings:
<span class="hljs-subst">{json.dumps(findings, indent=<span class="hljs-number">2</span>)}</span>
"""</span>
</code></pre>
<p>This logic ensures:</p>
<ul>
<li><p><strong>Security is enforced pragmatically</strong></p>
</li>
<li><p>No false rejections for minor issues</p>
</li>
<li><p>HTTPS is mandatory for approval</p>
</li>
<li><p>Clear, actionable feedback for developers</p>
</li>
</ul>
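<p>The full handler lives in the <code>lambda/</code> directory, but its core flow can be sketched as follows. The helper names <code>call_gemini</code> and <code>build_prompt</code> and the exact event shape are assumptions for illustration; only the fail-closed verdict parsing is concrete here:</p>
<pre><code class="lang-python">import json

# Check the compound verdict first so "APPROVE" inside
# "APPROVE_WITH_CHANGES" does not match prematurely.
VERDICTS = ("APPROVE_WITH_CHANGES", "APPROVE", "REJECT")

def parse_verdict(review_text):
    """Extract the final verdict the prompt instructs the model to emit."""
    for verdict in VERDICTS:
        if verdict in review_text:
            return verdict
    return "REJECT"  # fail closed: an unparseable review blocks the PR

def handler(event, context):
    # Sketch only: in the real function the Gemini key is read from
    # Secrets Manager and the Terrascan findings are POSTed by the CI job.
    findings = json.loads(event.get("body", "{}"))
    review = call_gemini(build_prompt(findings))  # hypothetical helpers
    return {
        "statusCode": 200,
        "body": json.dumps({"verdict": parse_verdict(review), "review": review}),
    }
</code></pre>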
<h3 id="heading-iamtf">🔹 <code>iam.tf</code></h3>
<p>IAM roles and policies are defined here:</p>
<ul>
<li><p>Lambda is granted access to <strong>Secrets Manager</strong></p>
</li>
<li><p>ECS task role attaches:</p>
<ul>
<li><code>AmazonECSTaskExecutionRolePolicy</code></li>
</ul>
</li>
</ul>
<p>This allows ECS to pull images, write logs, and function correctly.</p>
<h3 id="heading-ecstf">🔹 <code>ecs.tf</code></h3>
<p>This is where the <strong>Mario game comes to life</strong>:</p>
<ul>
<li><p>ECS task definition using Fargate</p>
</li>
<li><p>Docker image for Super Mario Bros</p>
</li>
<li><p>ECS service to keep the task running</p>
</li>
</ul>
<p>Fully serverless. No EC2 management required.</p>
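<p>As a rough sketch of the Fargate wiring (the image reference, port, sizing, and resource names below are illustrative placeholders, not the repo's exact values):</p>
<pre><code class="lang-bash">resource "aws_ecs_task_definition" "mario" {
  family                   = "mario-game"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  execution_role_arn       = aws_iam_role.ecs_task_execution.arn  # from iam.tf

  container_definitions = jsonencode([{
    name         = "mario"
    image        = "example/super-mario:latest"  # illustrative image reference
    portMappings = [{ containerPort = 8080 }]    # illustrative port
  }])
}

resource "aws_ecs_service" "mario" {
  name            = "mario-game"
  cluster         = aws_ecs_cluster.main.id  # illustrative reference
  task_definition = aws_ecs_task_definition.mario.arn
  desired_count   = 1
  launch_type     = "FARGATE"
}
</code></pre>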
<h3 id="heading-albtf">🔹 <code>alb.tf</code></h3>
<p>To expose the application publicly:</p>
<ul>
<li><p>Application Load Balancer</p>
</li>
<li><p>Listener on port <strong>80</strong> (initially)</p>
</li>
<li><p>Target group pointing to ECS tasks</p>
</li>
</ul>
<p>Later, we enhance this with <strong>HTTPS + ACM</strong>, making the setup production-ready.</p>
<h2 id="heading-provisioning-the-infrastructure">🚀 Provisioning the Infrastructure</h2>
<p>Before running Terraform, we need to create the S3 bucket for state storage:</p>
<pre><code class="lang-bash">aws s3 mb s3://pravesh-terraform-mario-state
</code></pre>
<p>If you see <code>BucketAlreadyExists</code>, simply:</p>
<ul>
<li><p>Update the bucket name in <code>backend.tf</code></p>
</li>
<li><p>Re-run the command with a unique name</p>
</li>
</ul>
<p>Now initialize Terraform:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terraform
terraform init
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886182983/28a9c1e3-5681-4578-bcd2-51fd347d37f4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-gemini-api-key-setup">Gemini API Key Setup</h2>
<p>Head over to <strong>Google AI Studio</strong> and generate a free Gemini API key.</p>
<p>Once you have it, keep it safe; we'll pass it dynamically to Terraform.</p>
<h2 id="heading-plan-amp-apply">Plan &amp; Apply</h2>
<p>Preview the infrastructure:</p>
<pre><code class="lang-bash">terraform plan -var=<span class="hljs-string">"gemini_api_key=&lt;YOUR_GEMINI_API_KEY&gt;"</span>
</code></pre>
<p>Review the plan and then deploy:</p>
<pre><code class="lang-bash">terraform apply -var=<span class="hljs-string">"gemini_api_key=&lt;YOUR_GEMINI_API_KEY&gt;"</span> -auto-approve
</code></pre>
<p>Provisioning takes around <strong>5-7 minutes</strong>, mainly due to ALB setup.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886422734/b291d128-2d20-44f8-a11c-2f89ad308c9b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-final-result">🎮 Final Result</h2>
<p>Once Terraform finishes:</p>
<ul>
<li><p>Copy the <strong>ALB DNS name</strong> from the outputs</p>
</li>
<li><p>Open it in your browser</p>
</li>
</ul>
<p>🎉 You should now see the <strong>Super Mario Bros game running on ECS</strong>, backed by a serverless AWS architecture and guarded by an AI-powered DevOps review system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886395212/3e842102-ad5a-465e-a49f-8bfa80515cba.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-terraform-ai-review-agent-in-action">🌟 Terraform AI Review Agent in Action</h2>
<p>Now comes the most exciting part: <strong>seeing the Terraform AI review agent in action</strong>.</p>
<p>Let's simulate a real-world scenario by making a small change to our infrastructure code and opening a <strong>Pull Request</strong>. As soon as we do this, our <strong>GitHub Actions workflow</strong> will automatically kick in and run the AI-based review.</p>
<p>Before that, you need to add your AWS Access Key and Secret Access Key to the repo's secrets. If you don't know how to do that, <a target="_blank" href="https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws#heading-step-1-add-aws-secrets">follow this guide</a> and do step 1 only; make sure you select the <strong>ai-devops-projects</strong> repo, not <strong>nginx-redis-node</strong>.</p>
<h3 id="heading-triggering-the-ai-review">Triggering the AI Review</h3>
<p>Make a minor change in the Terraform code and raise a Pull Request. Once the pipeline runs, you'll notice that the <strong>workflow fails</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886757413/5e091039-ca2b-41c6-af67-c0ac43413664.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886855974/cfa7bcd5-f589-4fe3-9f98-3a0140f0a171.png" alt class="image--center mx-auto" /></p>
<p>Why did this happen?</p>
<p>If you check the <strong>Violation report</strong>, you'll see that the AI agent rejected the changes. The reason is simple and important:</p>
<ul>
<li><p><strong>Three MEDIUM-severity issues are related to the Application Load Balancer</strong></p>
</li>
<li><p>Our application is currently running only on <strong>HTTP</strong></p>
</li>
<li><p>Running production workloads over HTTP is <strong>not secure</strong></p>
</li>
</ul>
<p>Because our AI agent follows a strict policy (defined in the Lambda prompt), the absence of an <strong>HTTPS listener</strong> on the ALB results in a <strong>PR rejection</strong>.</p>
<p>This is exactly how a real-world AI-powered infrastructure gate should behave.</p>
<h2 id="heading-fixing-the-issue-enabling-https">Fixing the Issue: Enabling HTTPS 🔒</h2>
<p>To resolve this, we'll enable <strong>HTTPS</strong> by creating an <strong>ACM certificate</strong> and updating our ALB configuration.</p>
<h3 id="heading-step-1-update-security-group-rules">Step 1: Update Security Group Rules</h3>
<p>Inside <code>security.tf</code>, uncomment the <strong>ingress rule for port 443</strong> so that HTTPS traffic is allowed.</p>
<h3 id="heading-step-2-enable-https-listener-on-alb">Step 2: Enable HTTPS Listener on ALB</h3>
<p>Open <code>alb.tf</code> and do the following:</p>
<ul>
<li><p>Uncomment the <code>aws_lb_listener "https"</code> block</p>
</li>
<li><p>Uncomment the ACM certificate resource</p>
</li>
<li><p>Remove the existing <code>app_listener</code> (HTTP listener)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768123092689/2bdc6d47-c228-433c-a056-df06e9f52d55.png" alt class="image--center mx-auto" /></p>
<p>This ensures HTTP is no longer used for forwarding traffic directly.</p>
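<p>Once uncommented, the HTTPS pieces look roughly like this (the domain and resource names are illustrative; the repo has the real blocks):</p>

```hcl
resource "aws_acm_certificate" "mario" {
  domain_name       = "mario.your-domain.com" # replace with your own domain
  validation_method = "DNS"
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn # illustrative reference
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.mario.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```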
<h3 id="heading-step-3-update-domain-name-in-acm-certificate">Step 3: Update Domain Name in ACM Certificate</h3>
<p>Inside the ACM certificate resource:</p>
<ul>
<li><p>Replace <code>praveshsudha.com</code> with <strong>your own domain name</strong></p>
</li>
<li><p>This is required because you'll be adding <strong>CAA and CNAME records</strong> for certificate validation</p>
</li>
</ul>
<h3 id="heading-step-4-add-caa-record-important">Step 4: Add CAA Record (IMPORTANT)</h3>
<p>Before creating the ACM certificate, make sure to add the following <strong>CAA record</strong> in your DNS provider:</p>
<ul>
<li><p><strong>Type:</strong> CAA</p>
</li>
<li><p><strong>Name:</strong> <code>@</code></p>
</li>
<li><p><strong>Flag:</strong> <code>0</code></p>
</li>
<li><p><strong>Tag:</strong> <code>issue</code></p>
</li>
<li><p><strong>CA Domain:</strong> <code>amazonaws.com</code></p>
</li>
<li><p><strong>TTL:</strong> Default</p>
</li>
</ul>
<blockquote>
<p><strong>Important:</strong> Add this CAA record <em>before</em> applying Terraform, otherwise ACM certificate creation may fail.</p>
</blockquote>
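<p>In standard zone-file notation, that record is a single line (the domain and TTL here are placeholders):</p>

```plaintext
your-domain.com.  3600  IN  CAA  0 issue "amazonaws.com"
```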
<h3 id="heading-step-5-enable-acm-output">Step 5: Enable ACM Output</h3>
<p>In <code>outputs.tf</code>, uncomment the output block for <code>acm_certificate_arn</code>.<br />This will help us fetch validation details later.</p>
<h3 id="heading-step-6-apply-the-changes">Step 6: Apply the Changes</h3>
<p>Run the following command:</p>
<pre><code class="lang-bash">terraform apply --var=<span class="hljs-string">"gemini_api_key=&lt;YOUR_GEMINI_KEY&gt;"</span> --auto-approve
</code></pre>
<p>This will:</p>
<ul>
<li><p>Create the ACM certificate</p>
</li>
<li><p>Add an HTTPS listener to the ALB</p>
</li>
</ul>
<p>Once completed, Terraform will output the <strong>ACM certificate ARN</strong>.</p>
<h3 id="heading-step-7-validate-the-acm-certificate">Step 7: Validate the ACM Certificate</h3>
<p>Use the ARN and run:</p>
<pre><code class="lang-bash">aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:&lt;ACCOUNT_ID&gt;:certificate/&lt;CERT_ID&gt;
</code></pre>
<p>From the output:</p>
<ul>
<li><p>Copy the <strong>CNAME name</strong> (only up to <code>mario</code>, not the full domain)</p>
</li>
<li><p>Copy the <strong>CNAME value</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767886981866/0788353a-e9b8-4244-a88c-532a0dd5584c.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887046988/c9e23a98-9b0f-4eac-9153-6c19f68948e8.png" alt class="image--center mx-auto" /></p>
<p>Add this CNAME record to your DNS provider.</p>
<p>Within a few minutes, the certificate status will change to <strong>ISSUED</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887057459/0510ae34-ef88-4428-9657-70b5a3a966bd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-8-point-your-domain-to-the-alb">Step 8: Point Your Domain to the ALB</h3>
<p>Now create a DNS record:</p>
<ul>
<li><p><strong>Type:</strong> CNAME</p>
</li>
<li><p><strong>Name:</strong> <code>mario</code></p>
</li>
<li><p><strong>Target:</strong> <code>&lt;YOUR_ALB_DNS_NAME&gt;</code></p>
</li>
<li><p><strong>TTL:</strong> Default</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887176099/c13e0e0d-78d5-4ab7-8157-92f6ef7adef9.png" alt class="image--center mx-auto" /></p>
<p>After a few minutes, your application will be live at:</p>
<p>👉 <a target="_blank" href="https://mario.your-domain.com/"><strong>https://mario.your-domain.com</strong></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887278525/a126a15a-03d9-42b0-9b12-63774b55a4cf.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-re-running-the-ai-review">Re-running the AI Review</h2>
<p>Now that HTTPS is enabled, let's test the AI agent again.</p>
<p>Run the following commands:</p>
<pre><code class="lang-bash">git checkout -b <span class="hljs-built_in">test</span>
git add outputs.tf security.tf alb.tf
git commit -m <span class="hljs-string">"testing ai-agent-workflow"</span>
git push origin <span class="hljs-built_in">test</span>
</code></pre>
<p>Go to your GitHub repository and open a <strong>Pull Request</strong>.</p>
<p>This time:</p>
<ul>
<li><p>GitHub Actions runs successfully</p>
</li>
<li><p>Terrascan reports are generated</p>
</li>
<li><p>Gemini analyzes the findings</p>
</li>
<li><p><strong>AI agent APPROVES the PR</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887327890/b81be227-0f5e-4970-8133-f3379f28b2d4.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-cleaning-up-resources">🌟 Cleaning Up Resources</h2>
<p>Once you're done experimenting with the project, it's <strong>very important</strong> to clean up all the resources to avoid unnecessary AWS charges.</p>
<p>Follow the steps below <strong>in order</strong> to safely delete everything we created.</p>
<h3 id="heading-step-1-destroy-terraform-resources">Step 1: Destroy Terraform Resources</h3>
<p>First, navigate to the <code>terraform</code> directory and run:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve --var=<span class="hljs-string">"gemini_api_key=&lt;YOUR_GEMINI_KEY&gt;"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887511725/722b0903-355e-41de-ba2d-502459770dba.png" alt class="image--center mx-auto" /></p>
<p>This command will:</p>
<ul>
<li><p>Terminate ECS services and tasks</p>
</li>
<li><p>Delete the Application Load Balancer</p>
</li>
<li><p>Remove Lambda functions and IAM roles</p>
</li>
<li><p>Clean up networking components like VPCs, subnets, and security groups</p>
</li>
</ul>
<h3 id="heading-step-2-delete-the-terraform-state-files-from-s3">Step 2: Delete the Terraform State Files from S3</h3>
<p>Once Terraform has destroyed all the resources, delete the remote state files stored in S3.</p>
<pre><code class="lang-bash">aws s3 rm s3://pravesh-terraform-mario-state --recursive
</code></pre>
<p>This removes all objects inside the bucket, including the Terraform state file.</p>
<h3 id="heading-step-3-remove-the-s3-bucket">Step 3: Remove the S3 Bucket</h3>
<p>Finally, delete the empty S3 bucket:</p>
<pre><code class="lang-bash">aws s3 rb s3://pravesh-terraform-mario-state
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767887535051/0532c7f3-f45f-45c7-9749-b2f85c126f6a.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-conclusion">🌟 Conclusion</h2>
<p>This project goes far beyond deploying a Super Mario game on AWS; it represents how <strong>modern DevOps is evolving with AI and serverless architectures</strong>.</p>
<p>By integrating <strong>Terraform</strong>, <strong>GitHub Actions</strong>, <strong>Terrascan</strong>, and <strong>Gemini</strong>, we built an <strong>AI-powered Terraform review agent</strong> that acts as a real CI/CD security gate. Every infrastructure change is evaluated based on risk, not guesswork. The AI summarizes security findings, suggests concrete remediations, and makes approval decisions that closely resemble how a senior DevOps engineer would review production infrastructure.</p>
<p>On the infrastructure side, we embraced a <strong>serverless-first approach</strong> using <strong>AWS ECS Fargate, Lambda, ALB, and managed cloud services</strong>. This setup reflects real-world architectures used in production today: scalable, cost-efficient, and operationally simple, without managing servers manually.</p>
<p>The key takeaway from this project is clear:<br /><strong>AI in DevOps is not about replacing engineers; it's about empowering them.</strong><br />By automating repetitive infrastructure reviews, we save valuable engineering hours, reduce human error, and ship changes with higher confidence and security.</p>
<p>I highly encourage you to fork the repository, experiment with breaking changes, tune the AI decision thresholds, and extend this project further. This is just the beginning of what AI-assisted DevOps can achieve.</p>
<p>Happy building 🚀</p>
<h3 id="heading-connect-with-me">🔗 Connect with me</h3>
<ul>
<li><p><strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">https://www.linkedin.com/in/pravesh-sudha/</a></p>
</li>
<li><p><strong>Twitter / X:</strong> <a target="_blank" href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p><strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a></p>
</li>
<li><p><strong>Blog:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">https://blog.praveshsudha.com</a></p>
</li>
</ul>
<p>If this project helped you learn something new, feel free to share it with your network; it truly helps a lot!</p>
]]></description><link>https://blog.praveshsudha.com/how-i-built-an-ai-terraform-review-agent-on-serverless-aws</link><guid isPermaLink="true">https://blog.praveshsudha.com/how-i-built-an-ai-terraform-review-agent-on-serverless-aws</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 How I Created an AI-Powered Secret Santa Using Cognee as the Memory Layer]]></title><description><![CDATA[<h2 id="heading-welcome-devs-another-fun-build-with-cognee-ai">Welcome Devs 👋  Another Fun Build with Cognee + AI</h2>
<p>Welcome Devs to another interesting blog from my side!<br />It's been a while since I first connected with <strong>Cognee</strong>, and exactly a month ago I actually built a <strong>Cognee Starter application from scratch using Flask</strong> and deployed it on <strong>AWS ECS using Terraform</strong>. If you haven't checked it out yet, here's the link to that build; you'll enjoy it:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/uvkwXSUJ6Hw">https://youtu.be/uvkwXSUJ6Hw</a></div>
<p> </p>
<p>Since then, the Cognee team has been on fire. Their GitHub repo recently crossed <strong>10K+ stars</strong> (absolutely deserved 🎉). And staying true to the momentum, they came up with a fun little community event: the <strong>Secret Santa Mini Challenge</strong>.</p>
<p>So for this challenge, I decided to build something a bit unique:<br /><strong>An Emotion-Aware Secret Santa powered by Gemini 2.5 Flash</strong>, with <strong>Cognee</strong> acting as the memory layer holding everything together.</p>
<hr />
<h2 id="heading-how-the-idea-hit-me-and-why-emotions-matter-in-secret-santa">How the Idea Hit Me 🤯, and Why Emotions Matter in Secret Santa</h2>
<p>After going through the rules and criteria of the challenge, I started brainstorming ideas and suddenly something clicked on a <em>very personal</em> level.</p>
<p>In my friend group, <strong>I'm the delightful one</strong>:<br />happy for absolutely no reason, just vibing, giggling, randomly remembering something from a Kevin Hart special 😂</p>
<p>But my friends?<br />Total opposite personalities:</p>
<ul>
<li><p>One is <strong>stressed 24/7</strong> because of career pressure</p>
</li>
<li><p>Another is <strong>moody</strong>, unpredictable like Mumbai weather</p>
</li>
<li><p>And the last one is the <strong>chill guy</strong>, relaxed in literally every situation</p>
</li>
</ul>
<p>Reflecting on that, I thought:<br /><strong>Why not create a Secret Santa that understands emotions the same way we understand each other?</strong></p>
<p>A Secret Santa that:</p>
<ul>
<li><p>Reads how each friend is feeling</p>
</li>
<li><p>Understands their energy, mood, and stress</p>
</li>
<li><p>Pairs them up based on emotional compatibility</p>
</li>
<li><p>And even helps choose a meaningful gift</p>
</li>
</ul>
<p>That's how <em>Emotion-Aware Secret Santa</em> was born.</p>
<hr />
<h2 id="heading-how-it-works-turning-feelings-into-smart-gift-matches">How It Works 🧠🎁: Turning Feelings Into Smart Gift Matches</h2>
<p>Each friend gives:</p>
<ul>
<li><p><strong>Their name</strong>, and</p>
</li>
<li><p><strong>A short description of their mood, week, stress level, or personality</strong></p>
</li>
</ul>
<p>For example:</p>
<ul>
<li><p>"Alice is overwhelmed with work and feeling stressed."</p>
</li>
<li><p>"Bob had a great week and is feeling positive and energetic."</p>
</li>
</ul>
<p>These tiny descriptions become the <em>foundation</em> for the AI's reasoning.</p>
<h3 id="heading-step-1-storing-the-emotional-descriptions-with-cognee">🧩 Step 1: Storing the emotional descriptions with Cognee</h3>
<p>Each user description is added into Cognee's memory layer using:</p>
<pre><code class="lang-python">cognee.add(...)
</code></pre>
<p>Then using:</p>
<pre><code class="lang-python">cognee.cognify()
</code></pre>
<p>Cognee processes all the data with <strong>Gemini</strong>, building:</p>
<ul>
<li><p>Semantic links</p>
</li>
<li><p>Entities</p>
</li>
<li><p>Relationships</p>
</li>
<li><p>A mini knowledge graph</p>
</li>
<li><p>Embeddings</p>
</li>
</ul>
<p>(I've shown this visually in my previous video; it's super cool to watch.)</p>
<h2 id="heading-step-2-cognee-asks-the-right-question">🧠 Step 2: Cognee asks the right question</h2>
<p>Cognee then asks:</p>
<blockquote>
<p><strong>What is the emotional state or mood of Alice?</strong></p>
</blockquote>
<p>Using <code>RAG_COMPLETION</code>, Gemini returns refined emotional states like:</p>
<ul>
<li><p>stressed</p>
</li>
<li><p>excited</p>
</li>
<li><p>lonely</p>
</li>
<li><p>happy</p>
</li>
<li><p>tired</p>
</li>
</ul>
<h2 id="heading-step-3-ai-powered-secret-santa-pairing">🎅 Step 3: AI-Powered Secret Santa Pairing</h2>
<p>Now the fun logic:</p>
<ul>
<li><p>Cognee assigns Secret Santa pairs</p>
</li>
<li><p>Makes sure no one gets themselves</p>
</li>
<li><p>And suggests a gift based on emotion</p>
</li>
</ul>
<p>Gift suggestions are generated using a <strong>local gift dictionary</strong> (zero extra AI cost, because while testing I hit the Gemini daily quota twice 💀😂).</p>
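<p>For the curious, the "no one gets themselves" constraint is just a derangement of the friend list. Here's a minimal, self-contained sketch of that pairing step with a tiny gift dictionary; the emotion-to-gift mapping below is made up for illustration, and the real <code>gift_gen.py</code> in the repo is more elaborate:</p>

```python
import random

# Hypothetical emotion -> gift mapping (the repo's list is bigger).
GIFTS = {
    "stressed": "a self-care kit with scented candles",
    "happy": "a party board game",
    "moody": "a guided journaling notebook",
    "chill": "a cozy oversized hoodie",
}

def assign_pairs(friends):
    """Reshuffle until nobody is their own Secret Santa (fine for small groups)."""
    if len(friends) < 2:
        raise ValueError("Need at least two friends for Secret Santa")
    receivers = friends[:]
    while any(g == r for g, r in zip(friends, receivers)):
        random.shuffle(receivers)
    return dict(zip(friends, receivers))

def reveal(pairs, moods):
    """Print who gifts whom, with a gift matched to the receiver's emotion."""
    for giver, receiver in pairs.items():
        gift = GIFTS.get(moods.get(receiver), "a surprise gift")
        print(f"{giver} -> {receiver}: {gift}")
```

<p>Usage is a one-liner, e.g. <code>reveal(assign_pairs(["Alice", "Bob", "Carol"]), {"Alice": "stressed", "Bob": "happy", "Carol": "chill"})</code>.</p>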
<h2 id="heading-step-4-the-big-reveal">🎉 Step 4: The Big Reveal</h2>
<p>Finally, the program prints a <strong>beautiful Secret Santa reveal</strong>:</p>
<ul>
<li><p>Who is gifting whom</p>
</li>
<li><p>Why they were paired</p>
</li>
<li><p>And what gift matches their emotional state</p>
</li>
</ul>
<p>Simple, wholesome, and powered by Cognee's memory + Gemini's reasoning.</p>
<hr />
<h2 id="heading-try-it-yourself-run-the-emotion-aware-secret-santa-on-your-machine">Try It Yourself 🎄: Run the Emotion-Aware Secret Santa on Your Machine</h2>
<p>I've open-sourced the entire project so you can explore, modify, and have fun with it.<br />The code is available here:</p>
<p>👉 <strong>GitHub Repo:</strong> <a target="_blank" href="https://github.com/Pravesh-Sudha/secret-santa-cognee">https://github.com/Pravesh-Sudha/secret-santa-cognee</a></p>
<p>Clone it to your system and you're ready to get started.</p>
<h2 id="heading-step-1-get-your-gemini-api-key">🔑 Step 1: Get Your Gemini API Key</h2>
<p>To run this project, you'll need a <strong>Gemini API key</strong>.<br />The good news? <strong>Google AI Studio gives you one for free.</strong></p>
<p>Once you have your key:</p>
<ol>
<li><p>Inside the project directory, create a <code>.env</code> file</p>
</li>
<li><p>Copy everything from <code>.env.example</code></p>
</li>
<li><p>Replace the values of:</p>
<ul>
<li><p><code>LLM_API_KEY</code></p>
</li>
<li><p><code>EMBEDDING_API_KEY</code><br />  with your Gemini key</p>
</li>
</ul>
</li>
</ol>
<p>And boom, the setup is done.</p>
<h2 id="heading-step-2-install-dependencies">🔧 Step 2: Install Dependencies</h2>
<p>Inside your project directory, run:</p>
<pre><code class="lang-bash">uv sync
</code></pre>
<p>This will install all required dependencies cleanly.</p>
<h2 id="heading-step-3-customise-your-friends-amp-gifts">📝 Step 3: Customise Your Friends &amp; Gifts</h2>
<p>You can now explore the code and make the project your own:</p>
<h3 id="heading-add-your-own-friends">👥 Add your own friends</h3>
<p>Open:</p>
<pre><code class="lang-bash">data/friends.json
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765293216920/eed109a1-d598-4acd-bc98-192e09b0e753.png" alt class="image--center mx-auto" /></p>
<p>Add your friends and their mood descriptions.<br />(Tip: try to keep it to a max of <strong>4 friends</strong>, otherwise you may hit the Gemini daily quota like I did 😭😂)</p>
<h3 id="heading-customise-the-gifts">🎁 Customise the gifts</h3>
<p>Inside:</p>
<pre><code class="lang-bash">gift_gen.py
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765293226961/a761470d-243f-48d2-9f41-1775bd35d12f.png" alt class="image--center mx-auto" /></p>
<p>You can update the gifts for each emotion to make them more fun, personal, or chaotic; your call.</p>
<h2 id="heading-step-4-run-the-project">Step 4: Run the Project</h2>
<p>Once everything is set up, run:</p>
<pre><code class="lang-bash">uv run main.py
</code></pre>
<p>The program takes around <strong>2-3 minutes</strong>, and then...</p>
<p>🎉 You get a full Secret Santa reveal right in your terminal!</p>
<ul>
<li><p>Who got whom</p>
</li>
<li><p>Their emotional reasoning</p>
</li>
<li><p>And the perfect gift suggestion</p>
</li>
</ul>
<p>All powered by Cognee + Gemini.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765291668541/22c57011-b959-4c6e-9944-16197aa6222a.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>NOTE:</strong><br />Initially, I planned to generate gifts using Gemini too,<br />but Gemini's "Requests per Minute" limit looked at me and said:<br />"Not today, brother."</p>
<p>So I switched to a local gift list: zero extra AI cost, much more reliable.</p>
</blockquote>
<hr />
<h2 id="heading-video-demonstration">📽 Video Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/86eA3UuxA54">https://youtu.be/86eA3UuxA54</a></div>
<p> </p>
<hr />
<h2 id="heading-conclusion-building-with-cognee-is-just-too-much-fun">🎄 Conclusion: Building with Cognee Is Just Too Much Fun</h2>
<p>This Secret Santa Mini Challenge by Cognee was the perfect excuse to experiment, break things, fix things, hit API limits twice 😭, and eventually build something that felt genuinely <em>personal</em>.</p>
<p>Using <strong>Cognee as the memory layer</strong> + <strong>Gemini 2.5 Flash for reasoning</strong> turned a simple holiday tradition into a small, emotionally aware AI system, and honestly, that's the kind of playful innovation that makes me love building these projects.</p>
<p>If you try it out, tweak it, or turn it into something wild and creative, I'd genuinely love to see it.<br />And a big shoutout to the Cognee team for organizing such a wholesome challenge and continuing to ship amazing updates to the ecosystem.</p>
<p>More AI projects, more experiments, and more community fun coming soon.<br />Till then: keep building, keep learning, and keep vibing.</p>
<h2 id="heading-connect-with-me">🌐 Connect With Me</h2>
<p>If you enjoyed this project or want to follow my DevOps + AI journey, find me here:</p>
<ul>
<li><p><strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">https://www.linkedin.com/in/pravesh-sudha/</a></p>
</li>
<li><p><strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p><strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a></p>
</li>
</ul>
<p>See you in the next build! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/how-i-created-an-ai-powered-secret-santa-using-cognee-as-the-memory-layer</link><guid isPermaLink="true">https://blog.praveshsudha.com/how-i-created-an-ai-powered-secret-santa-using-cognee-as-the-memory-layer</guid><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[RAG ]]></category><category><![CDATA[memory]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[How I Built My Terraform Portfolio: Projects, Repos, and Lessons Learned]]></title><description><![CDATA[<h3 id="heading-welcome-devs"><strong>Welcome Devs,</strong></h3>
<p>Today, we're not spinning up infrastructure, writing HCL, or fixing a broken state file (for once 😅).<br />Instead, we're looking back at the last <strong>8 months</strong>: a journey filled with learning, building, breaking things, fixing them again, and slowly becoming "that Terraform guy" in my circle.</p>
<p>It all started in <strong>April 2025</strong>, when one of my LinkedIn buddies shared that he got selected as a <strong>HashiCorp Ambassador</strong>.<br />That was the first time I genuinely thought:<br /><em>"Damn, this looks exciting. Why don't I aim for it too?"</em></p>
<p>Just a month before that, in <strong>March 2025</strong>, I had been selected as an <strong>AWS Community Builder</strong>, and around the same time, I launched my <strong>YouTube channel</strong> to share DevOps and Infra-as-Code tutorials with the world.<br />Slowly, people started watching, sharing, and sending messages saying they had actually deployed things using my guides.</p>
<p>So officially, in <strong>May 2025</strong>, I decided:</p>
<p>🔥 <strong>I'm going to prepare for HashiCorp Ambassador 2026. Let's do this seriously.</strong></p>
<p>Fast-forward 8 months:<br />I've built multiple Terraform projects, contributed to open-source repos, and crossed:</p>
<ul>
<li><p><strong>20,000+ views on my blogs</strong></p>
</li>
<li><p><strong>16,000+ views on YouTube</strong></p>
</li>
<li><p><strong>400+ awesome subscribers</strong></p>
</li>
</ul>
<p>More importantly, I went from "Terraform looks complicated" to "Terraform is my comfort zone."</p>
<p>And today, I'm sharing the <strong>exact resources, projects, repos, and lessons</strong> that helped me go from <strong>Zero to Hero</strong> in Terraform, and that will help you too.</p>
<hr />
<h2 id="heading-1-building-a-strong-terraform-foundation"><strong>1. Building a Strong Terraform Foundation</strong></h2>
<p>Even though I already had some familiarity with Terraform (thanks to scattered YouTube videos and random experiments), I decided to start <strong>from absolute zero</strong>, because if I'm going to teach something, I need to understand it deeply myself.</p>
<p>So the first thing I created was a <strong>Getting Started with Terraform</strong> guide.<br />This wasn't just another intro blog. My goal was to help beginners understand:</p>
<ul>
<li><p>What Terraform really <em>is</em></p>
</li>
<li><p>Why DevOps engineers rely on IaC</p>
</li>
<li><p>How providers work</p>
</li>
<li><p>Structuring a project with <a target="_blank" href="http://main.tf"><code>main.tf</code></a>, <a target="_blank" href="http://variables.tf"><code>variables.tf</code></a>, <a target="_blank" href="http://outputs.tf"><code>outputs.tf</code></a></p>
</li>
<li><p>What configuration means in practice</p>
</li>
</ul>
<p>Basically, the fundamentals you <em>must</em> know before touching any cloud resource.<br />If you are new, you can read the same guide here:<br />👉 <a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide"><strong>Getting Started with Terraform  A Beginners Guide</strong></a></p>
<p>Once the basics were sorted, I went deeper into the most important part of Terraform:<br />the thing that causes 90% of people's stress when it breaks:</p>
<h3 id="heading-the-terraform-state-file"><strong>The Terraform State File</strong></h3>
<p>Where to store it?<br />How to keep it safe?<br />How to ensure teams don't overwrite each other's state?</p>
<p>I wrote a complete blog explaining how to use <strong>AWS S3 + DynamoDB</strong> as a rock-solid remote backend for production-grade Terraform.<br />I even created a YouTube demo for those who prefer watching over reading.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/where-should-you-store-terraform-state-files-for-maximum-efficiency"><em>Where Should You Store Terraform State Files for Maximum Efficiency?</em></a><br />🎥 <strong>Video</strong>: <em>Terraform Remote Backend on AWS (S3 + DynamoDB)</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/_pjpx6rsxn4">https://youtu.be/_pjpx6rsxn4</a></div>
<p> </p>
<p>These two resources became the backbone of my Terraform foundation, not just for me, but for everyone following along.<br />Before I moved on to advanced projects, pipelines, and multi-environment infra, this foundation is what made everything else 10x easier.</p>
<hr />
<h2 id="heading-2-core-concepts-amp-best-practices-the-part-everyone-skips-but-shouldnt"><strong>2. Core Concepts &amp; Best Practices (The Part Everyone Skips But Shouldn't)</strong></h2>
<p>Once the foundations were clear, I moved into the phase where most beginners either get overwhelmed or fall in love with Terraform.<br />For me, this was the moment things <em>clicked</em>.</p>
<p>Because understanding Terraform is not just learning commands; it's learning <strong>structure</strong>, <strong>security</strong>, and <strong>scalability</strong>.</p>
<h3 id="heading-a-terraform-modules-the-secret-sauce"><strong>a) Terraform Modules: The Secret Sauce</strong></h3>
<p>One of the first "Aha!" moments for me was understanding <strong>modules</strong>.<br />If there's one thing that separates beginners from pros, it's this.</p>
<p>Modules teach you:</p>
<ul>
<li><p>How to avoid repetitive code</p>
</li>
<li><p>How to scale infra easily</p>
</li>
<li><p>How to structure projects cleanly</p>
</li>
<li><p>How teams collaborate efficiently</p>
</li>
</ul>
<p>I wrote an in-depth blog breaking this down and also created a complete YouTube walkthrough.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/terraform-modules-the-secret-sauce-to-scalable-infrastructure"><em>Terraform Modules: The Secret Sauce to Scalable Infrastructure</em></a><br />🎥 <strong>Video</strong>: <em>Terraform Module Explained with Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/_qmebISFHM8">https://youtu.be/_qmebISFHM8</a></div>
<p> </p>
<p>Learning modules improved how I thought about <em>every</em> project afterward.</p>
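<p>For readers new to the idea, a module call looks something like this (a minimal sketch; the module path and variable names here are illustrative, not from my actual repos):</p>
<pre><code class="lang-bash"># Reuse the same networking code across projects by calling it as a module
module "network" {
  source   = "./modules/network"   # hypothetical local module path
  vpc_cidr = "10.0.0.0/16"
  env_name = "dev"
}
</code></pre>
<p>Change the inputs, get a new copy of the same infrastructure: that's the whole trick behind avoiding repetitive code.</p>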
<h3 id="heading-b-terraform-security-practices-because-iac-without-security-is-just-a-script"><strong>b) Terraform Security Practices (Because IaC Without Security is Just a Script)</strong></h3>
<p>Infrastructure automation is amazing, but it also means that if you make a mistake, you can take down <strong>everything</strong>, instantly.</p>
<p>So I created a dedicated guide covering the <strong>Top 5 DevSecOps-focused Terraform security practices</strong>, including:</p>
<ul>
<li><p>Provider credential validation</p>
</li>
<li><p>Role-based access</p>
</li>
<li><p>Scanning Terraform code for vulnerabilities</p>
</li>
<li><p>Remote backend security</p>
</li>
<li><p>Avoiding manual state file edits (my favourite rule 😅)</p>
</li>
</ul>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/terraform-meets-devsecops-5-security-practices-you-cant-afford-to-ignore"><em>Terraform Meets DevSecOps: 5 Security Practices You Can't Ignore</em></a><br />🎥 <strong>Video</strong>: <em>Terraform Security Best Practices</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/AgFcX-H3SJU">https://youtu.be/AgFcX-H3SJU</a></div>
<p> </p>
<p>This was the point where both my blog and channel started gaining real traction; people were searching for practical, real-world Terraform security advice.</p>
<h3 id="heading-c-terraform-workspaces-the-most-underrated-feature-ever"><strong>c) Terraform Workspaces: The Most Underrated Feature Ever</strong></h3>
<p>Workspaces are like that quiet student in class who actually knows everything.</p>
<p>Nobody talks about them...<br />until they see how powerful multi-environment deployment becomes with a single command:</p>
<pre><code class="lang-bash">terraform workspace select dev
terraform workspace select prod
</code></pre>
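<p>As a quick illustration of why this is so powerful (a minimal sketch, not code from the original demo): Terraform exposes the active workspace name as <code>terraform.workspace</code>, so one configuration can adapt itself per environment:</p>
<pre><code class="lang-bash"># Hypothetical snippet: pick instance size based on the active workspace
locals {
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t2.micro"
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = local.instance_type

  tags = {
    Environment = terraform.workspace   # e.g. "dev" or "prod"
  }
}
</code></pre>
<p>Each workspace also gets its own state file, so <code>dev</code> and <code>prod</code> never step on each other.</p>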
<p>Workspaces changed how I approached multi-environment infra forever.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/terraform-workspaces-and-multi-environment-deployments"><em>Terraform Workspaces &amp; Multi-Environment Deployments</em></a><br />🎥 <strong>Video</strong>: <em>Complete Workspace Tutorial on AWS</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/W0x42D34OMw">https://youtu.be/W0x42D34OMw</a></div>
<p> </p>
<h3 id="heading-d-terraform-meets-ansible-iac-configuration-management"><strong>d) Terraform Meets Ansible (IaC + Configuration Management = 🔥)</strong></h3>
<p>The next logical step was learning how Terraform works <em>with</em> configuration management tools.</p>
<p>Terraform builds the infra.<br />Ansible configures what lives <strong>inside</strong> the infra.</p>
<p>I created a hands-on guide where I showed exactly how Terraform provisions servers and Ansible configures them: a production-ready combo.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/terraform-meets-ansible-automating-multi-environment-infrastructure-on-aws"><em>Terraform Meets Ansible: Automating Multi-Environment Infra on AWS</em></a><br />🎥 <strong>Video</strong>: <em>Terraform + Ansible Full Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tKlGTGye_hk">https://youtu.be/tKlGTGye_hk</a></div>
<p> </p>
<p>This section was a turning point in my portfolio because it bridged two worlds: Infra-as-Code and Configuration Management.</p>
<hr />
<h2 id="heading-3-cicd-automation-amp-modern-devops-where-everything-comes-together"><strong>3. CI/CD, Automation &amp; Modern DevOps (Where Everything Comes Together)</strong></h2>
<p>After covering modules, security, workspaces, and multi-environment provisioning, it was time to bring Terraform into the real world:<br /><strong>the world of automation, pipelines, and DevOps workflows.</strong></p>
<p>I had already used Terraform with <strong>Jenkins</strong> and even <strong>GitLab CI/CD</strong> in my previous projects (outside this series), so this time I wanted to do something fresh.</p>
<p>And what's more modern DevOps than using <strong>GitHub Actions</strong>?</p>
<p>So I decided to build a real project that connects:</p>
<ul>
<li><p>Terraform</p>
</li>
<li><p>GitHub Actions</p>
</li>
<li><p>AWS</p>
</li>
<li><p>And a production-style multi-component application (Node.js + Redis + Nginx)</p>
</li>
</ul>
<h3 id="heading-the-project-request-counter-app-deployment-on-aws"><strong>The Project: Request Counter App Deployment on AWS</strong></h3>
<p>This wasn't just a "hello world" project.<br />It involved:</p>
<ul>
<li><p>A <strong>Node.js API</strong> handling increment-count logic</p>
</li>
<li><p><strong>Redis</strong> to store the counter</p>
</li>
<li><p><strong>Nginx</strong> as a reverse proxy</p>
</li>
<li><p>Terraform to provision AWS infra</p>
</li>
<li><p>GitHub Actions to automatically deploy everything on push</p>
</li>
</ul>
<p>In short, a full <strong>Infrastructure + Application + CI/CD</strong> pipeline: the kind of thing you actually do in real companies.</p>
<p>I documented the entire workflow so anyone can recreate it step-by-step.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws"><em>CI/CD for Terraform with GitHub Actions: Deploying a Node.js + Redis App on AWS</em></a><br />🎥 <strong>Video</strong>: <em>GitHub Actions + Terraform Full Pipeline Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/D0w_1a3fYhM">https://youtu.be/D0w_1a3fYhM</a></div>
<p> </p>
<p>This project helped me understand how Terraform behaves inside a pipeline: the checks, the backend locks, the state consistency, the secret management, all of it.<br />And more importantly, it leveled up my DevOps portfolio significantly.</p>
<p>Real-world + automated + cloud-native = a perfect trifecta.</p>
<hr />
<h2 id="heading-4-cloud-native-amp-multi-tier-application-deployments-intermediate-projects"><strong>4. Cloud-Native &amp; Multi-Tier Application Deployments (Intermediate Projects)</strong></h2>
<p>Once I became comfortable with Terraform fundamentals, DevSecOps practices, and CI/CD automation, it was time to step into the <strong>cloud-native world</strong>  where real production systems live and breathe.</p>
<p>This phase of my journey pushed me out of my comfort zone, because now I wasn't just creating small demos.<br />I was building <strong>multi-tier</strong>, <strong>scalable</strong>, <strong>mission-critical</strong> infrastructures: the kind you'd actually find in modern companies.</p>
<h3 id="heading-a-deploying-a-three-tier-application-on-aws-eks-with-best-practices"><strong>a) Deploying a Three-Tier Application on AWS EKS (with Best Practices)</strong></h3>
<p>Kubernetes + Terraform is a whole universe on its own.<br />So to challenge myself, I decided to deploy a complete <strong>three-tier application</strong> on <strong>AWS EKS</strong>, fully automated using Terraform, following all real-world best practices.</p>
<p>This included:</p>
<ul>
<li><p>VPC with subnets</p>
</li>
<li><p>Managed node groups</p>
</li>
<li><p>Ingress controllers</p>
</li>
<li><p>Load balancers</p>
</li>
<li><p>Namespace separation</p>
</li>
<li><p>Secrets + configs</p>
</li>
<li><p>And a proper service-to-service communication workflow</p>
</li>
</ul>
<p>It was one of the most complex setups I'd built at that point, and the most rewarding.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/learn-how-to-deploy-a-three-tier-application-on-aws-eks-using-terraform-with-best-practices"><em>Deploy a Three-Tier Application on AWS EKS using Terraform (Best Practices)</em></a><br />🎥 <strong>Video</strong>: <em>EKS + Terraform Deployment Tutorial</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/n8BQ3XlCiKE">https://youtu.be/n8BQ3XlCiKE</a></div>
<p> </p>
<p><em>(This video crossed 1,000+ views, my first big milestone!)</em></p>
<hr />
<h3 id="heading-b-deploying-a-highly-scalable-amp-available-django-application-aws-terraform"><strong>b) Deploying a Highly Scalable &amp; Available Django Application (AWS + Terraform)</strong></h3>
<p>After EKS, I wanted to explore another real-world architecture: something more traditional, but equally production-grade.</p>
<p>So I built a <strong>highly scalable Django application</strong> hosted on AWS using Terraform.<br />This project included all the standard AWS building blocks you'd expect in a real enterprise setup:</p>
<ul>
<li><p><strong>RDS</strong> for relational database</p>
</li>
<li><p><strong>Secrets Manager</strong> for secure credentials</p>
</li>
<li><p><strong>Application Load Balancer</strong></p>
</li>
<li><p><strong>Auto Scaling Group</strong></p>
</li>
<li><p><strong>EC2 instances</strong> for compute</p>
</li>
<li><p><strong>Private/public subnets</strong></p>
</li>
<li><p>Proper <strong>network isolation</strong> and <strong>high availability</strong></p>
</li>
</ul>
<p>This architecture reflected how an actual company would deploy a Python backend in production.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/deploying-a-highly-scalable-and-available-django-application-on-aws-with-terraform"><em>Deploying a Highly Scalable Django Application on AWS with Terraform</em></a><br />🎥 <strong>Video</strong>: <em>Django + AWS + Terraform Full Architecture Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/idUGgFry72k">https://youtu.be/idUGgFry72k</a></div>
<p> </p>
<p>This stage of my Terraform portfolio strengthened my confidence in handling <strong>full-stack cloud-native deployments</strong>, not just isolated resources.</p>
<hr />
<h2 id="heading-5-game-deployments-amp-creative-infra-projects-the-standout-pieces"><strong>5. Game Deployments &amp; Creative Infra Projects (The Standout Pieces)</strong></h2>
<p>Now, let's talk about the most <em>fun</em> part of my Terraform journey: the projects that truly made my portfolio stand out.</p>
<p>I've been a gamer since childhood, and fun fact:<br />I actually bought my <strong>PS5</strong> using the prize money I won as the <a target="_blank" href="http://Dev.to"><strong>Dev.to</strong></a> <strong>Runner-H Challenge Winner</strong> (I got scolded by my Dad for this unjust purchase, but HELL YEAH, it was worth it).<br />So, naturally, I decided to merge my love for gaming with my DevOps career.</p>
<p>The result?<br />Some of the most unique, creative, and highly engaging Terraform projects I've ever built.</p>
<h3 id="heading-a-deploying-super-mario-bros-on-aws-eks-award-winning-project"><strong>a) Deploying Super Mario Bros on AWS EKS (Award-Winning Project)</strong></h3>
<p>This one will always be special.</p>
<p>I deployed the classic <strong>Super Mario Bros</strong> game on <strong>AWS EKS</strong> using Terraform, complete with pods, services, ingress, and Kubernetes best practices.</p>
<p>This project wasn't just a hit among developers;<br />it actually helped me <strong>win the AWS Containers 4x4 Challenge</strong>, and I received some amazing premium swag from the community.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/how-to-use-terraform-to-deploy-super-mario-on-aws-eks-detailed-instructions"><em>Deploy Super Mario on AWS EKS using Terraform (Step-by-Step)</em></a><br />🎥 <strong>Video</strong>: <em>Super Mario on Kubernetes Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/A6TLZU_CtjY">https://youtu.be/A6TLZU_CtjY</a></div>
<p> </p>
<h3 id="heading-b-deploying-tetris-on-aws-ecs-with-terraform"><strong>b) Deploying Tetris on AWS ECS with Terraform</strong></h3>
<p>After nailing Mario, I didn't want to stop.</p>
<p>Next, I deployed <strong>Tetris</strong>, this time using Amazon <strong>ECS</strong> with Terraform.<br />This project explores how containerized applications run on ECS Fargate, how services scale, and how ALBs route traffic.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/how-to-deploy-a-tetris-game-on-aws-ecs-with-terraform"><em>How to Deploy a Tetris Game on AWS ECS with Terraform</em></a><br />🎥 <strong>Video</strong>: <em>Tetris on ECS: Full Walkthrough</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tVuqBZfU04M">https://youtu.be/tVuqBZfU04M</a></div>
<p> </p>
<h3 id="heading-c-deploying-cognee-ai-starter-app-on-ecs-collaboration-project"><strong>c) Deploying Cognee AI Starter App on ECS (Collaboration Project)</strong></h3>
<p>One of the most exciting collaborations of my journey was with <strong>Cognee AI</strong>, the memory layer of AI Agents.</p>
<p>I built a <strong>Flask-based Cognee Starter Application from scratch</strong>, containerized it, and deployed it on AWS ECS using Terraform as the IaC backbone.</p>
<p>This project taught me a lot about real product deployment workflows, container orchestration, and DevOps collaboration.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/deploying-cognee-ai-starter-app-on-aws-ecs-using-terraform"><em>Deploying Cognee AI Starter App on ECS with Terraform</em></a><br />🎥 <strong>Video</strong>: <em>Cognee AI Deployment Demo</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/uvkwXSUJ6Hw">https://youtu.be/uvkwXSUJ6Hw</a></div>
<p> </p>
<h3 id="heading-d-deploying-an-amazon-clone-on-aws-amazon-on-aws-meta-enough"><strong>d) Deploying an Amazon Clone on AWS (Amazon on AWS: Meta Enough?)</strong></h3>
<p>To wrap up this creative phase, I decided to do something hilarious and ambitious:</p>
<p><strong>Deploying an Amazon Clone on AWS itself.</strong><br />Yes, Amazon on AWS. A true full-circle moment. 😂</p>
<p>This project used:</p>
<ul>
<li><p>Jenkins for CI/CD</p>
</li>
<li><p>Terraform for infra</p>
</li>
<li><p>AWS services like EC2, ALB, ASG, RDS, VPC</p>
</li>
<li><p>And a full clone app architecture setup</p>
</li>
</ul>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/deploy-an-amazon-clone-on-aws-a-complete-guide-with-jenkins-and-terraform"><em>Deploy an Amazon Clone on AWS (Complete Guide with Jenkins + Terraform)</em>  
</a>🎥 <strong>Video</strong>: <em>Amazon Clone Deployment Tutorial</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/YSxoJH6CWE4">https://youtu.be/YSxoJH6CWE4</a></div>
<p> </p>
<p>These projects became the highlights of my Terraform portfolio: the kind that make recruiters pause, smile, and think:<br /><em>"Okay, this person actually enjoys building stuff."</em></p>
<hr />
<h2 id="heading-6-reflection-amp-growth-the-end-of-this-portfolio-but-not-the-journey"><strong>6. Reflection &amp; Growth (The End of This Portfolio, But Not the Journey)</strong></h2>
<p>And now here we are.<br />Eight months later.<br />Countless blogs, videos, deployments, wins, failures, swag, and late-night debugging sessions later, I finally paused and asked myself a very simple question:</p>
<h3 id="heading-if-a-complete-beginner-asked-me-for-guidance-today-what-would-i-say"><strong>If a complete beginner asked me for guidance today, what would I say?</strong></h3>
<p>Because when you're learning something as powerful as Terraform, the hardest part isn't understanding <code>.tf</code> files;<br />it's navigating the early confusion, the overwhelming docs, the trial-and-error, and the mistakes we all make.</p>
<p>So to answer that question, I created a dedicated piece:<br /><strong>the top 5 mistakes beginners make while learning Terraform</strong>, and how to avoid them.</p>
<p>📘 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/dont-touch-terraform-before-avoiding-these-5-rookie-mistakes"><em>Don't Touch Terraform Before Avoiding These 5 Rookie Mistakes</em></a><br />🎥 <strong>Video</strong>: <em>Top 5 Beginner Mistakes in Terraform</em></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/EgWaZpXhGI0">https://youtu.be/EgWaZpXhGI0</a></div>
<p> </p>
<p>This video and blog were my way of giving back to anyone starting the same journey I took in April 2025.<br />If I could go back in time, this is exactly what I would hand to myself.</p>
<h3 id="heading-the-terraform-guide-everything-in-one-place"><strong>The Terraform Guide (Everything in One Place)</strong></h3>
<p>To make things easier for learners, I bundled every single blog (foundations, modules, state files, workspaces, EKS, ECS, CI/CD, everything) into a clean, structured series:</p>
<p>📚 <strong>Terraform Guide Series:</strong> <a target="_blank" href="https://blog.praveshsudha.com/series/terraform">All blogs in one place</a></p>
<p>This series now stands as a complete "Zero-to-Hero" path for anyone wanting to master Terraform using real projects.</p>
<h3 id="heading-terraform-playlist-all-video-demonstrations"><strong>Terraform Playlist (All Video Demonstrations)</strong></h3>
<p>And for visual learners, I created a dedicated playlist on YouTube containing every demo, from foundational projects to Kubernetes deployments and game infra:</p>
<p>🎥 <strong>Terraform Project Playlist</strong></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtube.com/playlist?list=PLlens1h3v6tcwtiXhjsCtkqh34E1E4toq&amp;si=4EWYSgsRmuhdDR9O">https://youtube.com/playlist?list=PLlens1h3v6tcwtiXhjsCtkqh34E1E4toq&amp;si=4EWYSgsRmuhdDR9O</a></div>
<p> </p>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>As I wrap up this 8-month Terraform journey, one thing is clear: this portfolio isn't just a collection of <code>.tf</code> files; it's a reflection of growth, discipline, creativity, and identity.</p>
<p>When I started back in April 2025, I didn't know Terraform beyond the basics. I simply made a decision in May 2025:<br /><strong>I will learn this tool properly and build something meaningful.</strong></p>
<p>Eight months later, here's what that decision turned into:</p>
<ul>
<li><p><strong>I learned Terraform from scratch</strong></p>
</li>
<li><p><strong>Built real-world, production-style cloud infrastructures</strong></p>
</li>
<li><p><strong>Won community challenges</strong>, including AWS Containers 4x4</p>
</li>
<li><p><strong>Collaborated on AI projects</strong>, like the Cognee AI deployment</p>
</li>
<li><p><strong>Published blogs that crossed 20,000+ views</strong></p>
</li>
<li><p><strong>Grew my YouTube channel to 16,000+ views and 400+ subscribers</strong></p>
</li>
<li><p>And most importantly, <strong>built projects that genuinely reflect who I am as a DevOps engineer</strong></p>
</li>
</ul>
<p>This journey taught me that you don't need a perfect starting point; you just need a consistent one.<br />It showed me that sharing your learning publicly builds connection, credibility, and confidence.<br />And it proved that passion projects (like deploying Super Mario and Tetris using Terraform) can teach you more than any textbook ever will.</p>
<p>If you're just beginning your Terraform journey, I'll leave you with this:</p>
<h3 id="heading-start-small-stay-consistent-build-publicly">Start small. Stay consistent. Build publicly.</h3>
<p>Because the internet remembers builders  not perfectionists.</p>
<p>Thank you for reading my story.<br />I hope this motivates you to start building your own.</p>
<h3 id="heading-connect-with-me"><strong>📌 Connect With Me</strong></h3>
<p>🔗 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">https://www.linkedin.com/in/pravesh-sudha/</a><br />🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">https://x.com/praveshstwt</a><br />📺 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a><br />🌐 <strong>Website/Blogs:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">https://blog.praveshsudha.com</a></p>
<p>Let's keep learning and building, together.</p>
]]></description><link>https://blog.praveshsudha.com/how-i-built-my-terraform-portfolio-projects-repos-and-lessons-learned</link><guid isPermaLink="true">https://blog.praveshsudha.com/how-i-built-my-terraform-portfolio-projects-repos-and-lessons-learned</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[projects]]></category><category><![CDATA[#learning-in-public]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[Don’t Touch Terraform Before Avoiding These 5 Rookie Mistakes]]></title><description><![CDATA[<h2 id="heading-introduction">🌟 Introduction</h2>
<p>Welcome back, Devs!<br />A few weeks ago, I shared <a target="_blank" href="https://blog.praveshsudha.com/terraform-meets-devsecops-5-security-practices-you-cant-afford-to-ignore"><strong>5 Best Security Practices for Terraform</strong></a>. That guide was more for folks who already work with Terraform day in and day out: the ones managing infra at scale, reviewing modules, and pushing changes through CI/CD pipelines.</p>
<p>But what about the beginners?</p>
<p>The ones who just wrapped up the basics of DevOps (Linux, Networking, Docker, Git) and are now stepping into the cloud world. For them, Infrastructure as Code can feel, well, intimidating at first. Terraform looks simple from the outside, but when you actually start writing configurations, the learning curve hits hard. And it's totally normal: IaC is a tough nut to crack initially.</p>
<p>So to make your journey smoother and confusion-free, I'm back with today's blog, where we'll break down the <strong>Top 5 Mistakes Beginners Make While Learning Terraform</strong>, and how you can avoid them.</p>
<p>Without further ado, <strong>let's get started!</strong></p>
<hr />
<h2 id="heading-before-we-dive-in-do-this-first"><strong>🌟 Before We Dive In, Do This First</strong></h2>
<p>Before we dive deep into the mistakes, let's get one thing out of the way: <strong>make sure you've got Terraform ready on your system</strong>. No matter how good the guide is, nothing makes sense unless you've actually installed the CLI and can run those sweet <code>terraform init</code> and <code>terraform apply</code> commands.</p>
<p>Since I'm an <strong>AWS Community Builder</strong>, I usually stick to <strong>AWS</strong> as my cloud provider for demos and explanations. If you're following along, you'll need to connect Terraform to your AWS account. You can do that in two ways:</p>
<ol>
<li><p><strong>Export AWS credentials directly in your terminal</strong><br /> (<code>AWS_ACCESS_KEY_ID</code> + <code>AWS_SECRET_ACCESS_KEY</code>)<br /> Works fine, but not the best option for long-term use.</p>
</li>
<li><p><strong>Install the AWS CLI (recommended)</strong><br /> This is cleaner, more secure, and helps you manage multiple profiles easily.<br /> Just create an IAM user with the right permissions and run <code>aws configure</code>.</p>
</li>
</ol>
<p>If you don't know how to set that up, don't worry. I've already covered the entire process in my <strong>Beginner's Guide to Terraform</strong>. You can check it out here:<br />👉 <a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli">https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli</a></p>
<p>Once your CLI and AWS credentials are set, you're all ready to explore the mistakes beginners make and how you can avoid them.</p>
<hr />
<h2 id="heading-mistake-1-treating-terraform-like-a-scripting-tool"><strong> Mistake 1: Treating Terraform Like a Scripting Tool</strong></h2>
<p>This is hands-down the most common beginner mistake.</p>
<p>Most folks who get introduced to Terraform have already touched at least one programming language  Python, Go, Rust, maybe even Bash scripting. And because of that prior experience, they naturally assume Terraform will behave the same way:</p>
<p><strong>"I wrote line 1 first, so Terraform will execute that first, right?"</strong><br />Nope. Not at all.</p>
<p>Terraform is <strong>not</strong> a scripting tool.<br />It doesn't run your code line by line.<br />It doesn't care about the order in which you wrote your resources.</p>
<p>Terraform is <strong>declarative</strong>, not <strong>imperative</strong>.</p>
<p>Instead of following your code from top to bottom, Terraform reads <em>all</em> the resources in your configuration and builds something called a <strong>dependency graph</strong>. This graph tells Terraform which resources depend on which other resources, and <em>that</em> determines the execution order.</p>
<p>So beginners often get confused:</p>
<blockquote>
<p>Why is Terraform not creating things in the order I wrote them?</p>
</blockquote>
<p>Because Terraform is smarter than that. It looks at dependencies, not line numbers.</p>
<p><strong>Moral of the story:</strong><br />With Terraform, you don't explain <em>how</em> to create each step. You only declare <em>what</em> you want (like an EC2 instance, security groups, and a VPC) and Terraform figures out the "how" automatically.</p>
<p>Once you understand this mindset shift, Terraform becomes much easier (and honestly, more fun) to work with.</p>
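<p>To make this concrete, here's a minimal sketch (the resource names and AMI ID are illustrative): the security group is written <em>after</em> the instance that uses it, yet Terraform creates the security group first, because the reference to its <code>id</code> adds an edge to the dependency graph:</p>
<pre><code class="lang-bash">resource "aws_instance" "web" {
  ami                    = "ami-12345678"   # placeholder AMI ID
  instance_type          = "t2.micro"

  # Implicit dependency: Terraform sees this reference and orders creation
  vpc_security_group_ids = [aws_security_group.web_sg.id]
}

# Declared later in the file, but created first
resource "aws_security_group" "web_sg" {
  name = "web-sg"
}
</code></pre>
<p>Line order never mattered; the reference did.</p>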
<hr />
<h2 id="heading-mistake-2-hardcoding-everything-instead-of-using-variables"> <strong>Mistake 2: Hardcoding Everything Instead of Using Variables</strong></h2>
<p>If you're anything like me, you love going through official docs whenever you start learning a new tech. And if you visit the Terraform documentation, you'll notice something instantly:</p>
<p>Most of their example configurations <strong>hardcode</strong> values.</p>
<p>Region? Hardcoded.<br />AMI ID? Hardcoded (or fetched via a data source).<br />App name, DB name, instance type? All hardcoded.</p>
<p>And honestly  that's fine <em>when you're just starting out</em>. Hardcoding helps beginners understand the structure of a resource without worrying about variables, files, or module structures.</p>
<p>But once you get a basic understanding of how Terraform works<br /><strong>hardcoding becomes your worst enemy.</strong></p>
<p>Let me explain with a simple example.</p>
<p>Say you've deployed a two-tier application on AWS: your VPC, ALB, EC2, RDS, security groups, everything. Now there's a new feature request, and you don't want to deploy the changes in the same environment.</p>
<p>So you decide to replicate the environment.</p>
<p>If everything is hardcoded, you're now stuck manually changing names and identifiers for every single resource.<br />Painful.<br />Time-consuming.<br />Highly prone to breaking things.</p>
<h3 id="heading-the-fix-use-variables-like-a-pro"><strong>The Fix: Use Variables Like a Pro</strong></h3>
<p>Store all your important attributes inside a <code>variables.tf</code> file.<br />This gives you a clean, centralized location for every configuration value your entire project depends on.</p>
<p>In the above scenario, if your resources use variables, all you need to do is change a value in one place, and Terraform will automatically reflect it everywhere.</p>
<p>Here's how you define a variable in Terraform:</p>
<pre><code class="lang-bash">variable <span class="hljs-string">"instance_type"</span> {
  default = <span class="hljs-string">"t2.micro"</span>  <span class="hljs-comment"># EC2 Instance Type</span>
}
</code></pre>
<p>Clean, reusable, scalable, and exactly how real-world Terraform is written.</p>
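<p>For completeness, here's how such a variable actually gets consumed (a minimal sketch; resource names are illustrative): reference it with <code>var.instance_type</code> inside a resource, and override the default per environment without touching the code:</p>
<pre><code class="lang-bash"># main.tf — consume the variable instead of hardcoding the value
resource "aws_instance" "app" {
  ami           = "ami-12345678"     # placeholder AMI ID
  instance_type = var.instance_type  # defaults to "t2.micro"
}
</code></pre>
<p>Then, for a staging copy, run <code>terraform apply -var="instance_type=t3.small"</code>, or keep per-environment values in a <code>staging.tfvars</code> file and pass <code>-var-file=staging.tfvars</code>.</p>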
<hr />
<h2 id="heading-mistake-3-mixing-manual-changes-through-the-aws-console"><strong> Mistake 3: Mixing Manual Changes Through the AWS Console</strong></h2>
<p>This one is a classic beginner trap.</p>
<p>When you provision infrastructure using Terraform, it stores a full blueprint of your resources inside the <strong>terraform.tfstate</strong> file. This file represents the <em>current state</em> of your infrastructure  basically Terraforms memory of what exists.</p>
<p>Now imagine you want to update something small, like renaming an EC2 instance.<br />You open the AWS Console, click on the instance, change the name, hit save, and boom, done.</p>
<p>Simple, right?</p>
<p><strong>Yes. But also very wrong.</strong></p>
<p>Making manual console changes creates something called <strong>drift</strong>, a mismatch between:</p>
<ul>
<li><p>what actually exists in AWS</p>
</li>
<li><p>what Terraform <em>thinks</em> exists (according to the state file)</p>
</li>
</ul>
<p>This drift becomes a huge headache because Terraform will now get confused about what's changed, what needs to be replaced, or what should not exist at all. For beginners, handling drift is even more overwhelming because it's not always obvious where things went wrong.</p>
<p>Here's the golden rule:</p>
<p>If you provision resources using Terraform,<br /><strong>update or delete them using Terraform only.</strong></p>
<p>Avoid the temptation of "quick fixes" in the AWS Console. A few clicks today can cause hours of debugging tomorrow.</p>
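<p>And if drift has already crept in, Terraform can at least show you the damage. A refresh-only plan compares your real infrastructure against the state file without proposing any changes to your configuration (these are standard Terraform CLI flags, available since Terraform 0.15.4):</p>
<pre><code class="lang-bash"># Detect drift: refresh state from the provider and report differences
terraform plan -refresh-only

# If the manual change should become the new source of truth,
# accept it into the state file
terraform apply -refresh-only
</code></pre>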
<h3 id="heading-moral-of-the-story"><strong>Moral of the story:</strong></h3>
<p>Treat Terraform as your <strong>single source of truth</strong>.<br />No ClickOps. No shortcuts.</p>
<hr />
<h2 id="heading-mistake-4-ignoring-terraform-resource-dependencies"> <strong>Mistake 4: Ignoring Terraform Resource Dependencies</strong></h2>
<p>Terraform is pretty smart when it comes to understanding relationships between resources. It automatically builds a <strong>dependency graph</strong> to figure out what needs to be created first and what depends on what.<br />Most of the time, this works beautifully.</p>
<p>But sometimes Terraform needs a little help.</p>
<p>There are scenarios where Terraform <em>cannot</em> infer dependencies on its own, especially when two resources don't have an obvious reference to each other. That's where beginners get stuck.</p>
<p>For example:</p>
<ul>
<li><p>Applying an <strong>S3 Bucket Policy</strong> before the bucket is actually created</p>
</li>
<li><p>Applying <strong>IAM role attachments</strong> before the IAM role itself exists</p>
</li>
<li><p>Creating a <strong>Lambda permission</strong> before the Lambda function is ready</p>
</li>
</ul>
<p>In such cases, Terraform might try to create things in the wrong order, leading to errors.</p>
<h3 id="heading-the-fix-use-dependson-smartly"><strong>The Fix: Use</strong> <code>depends_on</code> Smartly</h3>
<p>Terraform gives us an escape hatch: the <code>depends_on</code> meta-argument.</p>
<p>With <code>depends_on</code>, you can explicitly tell Terraform:</p>
<blockquote>
<p>Hey, this resource should only be created <em>after</em> this other resource is fully ready.</p>
</blockquote>
<p>It ensures the correct and predictable order of execution.</p>
<p>Example use case:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_s3_bucket_policy"</span> <span class="hljs-string">"bucket_policy"</span> {
  bucket = aws_s3_bucket.my_bucket.id

  policy = data.aws_iam_policy_document.example.json

  depends_on = [
    aws_s3_bucket.my_bucket
  ]
}
</code></pre>
<p>Now Terraform knows for sure: <strong>bucket first, policy second.</strong></p>
<h3 id="heading-moral-of-the-story-1"><strong>Moral of the story:</strong></h3>
<p>Understand how Terraform creates its dependency graph, and use <code>depends_on</code> wisely when Terraform cant figure things out on its own.</p>
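<p>Also worth knowing: whenever one resource references another's attribute, Terraform infers the dependency automatically, so <code>depends_on</code> is only needed when no such reference exists. A minimal sketch (resource and policy names here are illustrative):</p>
<pre><code class="lang-bash">resource "aws_iam_role" "app_role" {
  name               = "app-role"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_iam_role_policy_attachment" "app_attach" {
  # Referencing the role's name creates an implicit dependency,
  # so no depends_on is required here
  role       = aws_iam_role.app_role.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
</code></pre>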
<hr />
<h2 id="heading-mistake-5-thinking-terraform-is-only-about-init-plan-and-apply"><strong> Mistake 5: Thinking Terraform Is Only About</strong> <code>init</code>, <code>plan</code>, and <code>apply</code></h2>
<p>I still remember when I started learning Terraform.<br />I watched a bunch of YouTube tutorials, and almost every single one followed the exact same pattern:</p>
<ul>
<li><p>Write config in <code>main.tf</code>, provider in <code>provider.tf</code>, vars in <code>variables.tf</code>, outputs in <code>outputs.tf</code></p>
</li>
<li><p>Run <code>terraform init</code></p>
</li>
<li><p>Run <code>terraform plan</code> (and let's be honest, most beginners skip this 😅)</p>
</li>
<li><p>Run <code>terraform apply --auto-approve</code></p>
</li>
<li><p>And at the end, <code>terraform destroy</code></p>
</li>
</ul>
<p>And that's it.<br />End of tutorial.<br />Done.<br />You now know Terraform. 🙃</p>
<p>Except that's not the full story.</p>
<p>Terraform is <strong>not</strong> just about those three commands. There are many other commands that help you:</p>
<ul>
<li><p>write cleaner, more readable code</p>
</li>
<li><p>avoid silly mistakes</p>
</li>
<li><p>prevent accidental deployments</p>
</li>
<li><p>stick to best practices from day one</p>
</li>
</ul>
<p>But beginners (including me back in the day) completely ignore them.</p>
<p>Here are a few essential ones:</p>
<h3 id="heading-terraform-plan-seriously-use-it"><strong>🔹</strong> <code>terraform plan</code> (Seriously, use it)</h3>
<p>Before applying, <em>always</em> check the plan.<br />It shows you what Terraform will create, modify, or delete.<br />Skipping this step is how disasters happen, and trust me, I've been guilty of this too. 😅</p>
<h3 id="heading-terraform-fmt-fix-your-code-automatically"><strong>🔹</strong> <code>terraform fmt</code> (Fix your code automatically)</h3>
<p>Your code may work, but if it looks like a messy bowl of noodles, nobody wants to maintain it.<br /><code>terraform fmt</code> formats your HCL beautifully and keeps your files consistent.</p>
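<p>One handy companion if you use CI: <code>fmt</code> can also act as a check instead of rewriting files. A command sketch using standard Terraform CLI flags:</p>
<pre><code class="lang-bash"># Rewrite files in place, including nested modules
terraform fmt -recursive

# In CI: exit non-zero if any file is badly formatted, without changing it
terraform fmt -check -recursive
</code></pre>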
<h3 id="heading-terraform-validate-catch-misconfigurations-early"><strong>🔹</strong> <code>terraform validate</code> (Catch misconfigurations early)</h3>
<p>Sometimes your code looks right but contains subtle mistakes: wrong types, bad arguments, misplaced blocks, etc.<br /><code>terraform validate</code> helps detect these before you even think about applying.</p>
<h3 id="heading-moral-of-the-story-2"><strong>Moral of the story:</strong></h3>
<p>Make <code>fmt</code>, <code>validate</code>, and <code>plan</code> a daily habit.<br />They will save you from so many unnecessary headaches.</p>
<hr />
<h2 id="heading-hands-on-deploying-a-static-portfolio-website-on-an-s3-bucket"><strong>🌟 Hands-On: Deploying a Static Portfolio Website on an S3 Bucket</strong></h2>
<p>Alright, that's enough theory; now let's actually get our hands dirty.<br />To make everything we discussed more practical, we'll deploy a <strong>static portfolio website</strong> on an S3 bucket using Terraform.</p>
<p>The complete code is available here:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects"><strong>https://github.com/Pravesh-Sudha/terra-projects</strong></a></p>
<p>Once you open the repo, navigate to the <code>terra-mistakes</code> directory. Inside it, you'll find multiple Terraform files and a <code>static/</code> folder. The <code>static</code> directory contains two HTML files:</p>
<ul>
<li><p><strong>index.html</strong>: your main portfolio page</p>
</li>
<li><p><strong>error.html</strong>: fallback page for unexpected errors</p>
</li>
</ul>
<p>Inside <code>main.tf</code>, we're:</p>
<ul>
<li><p>creating an S3 bucket</p>
</li>
<li><p>enabling static website hosting</p>
</li>
<li><p>uploading both HTML files</p>
</li>
<li><p>attaching a bucket policy to allow public access</p>
</li>
</ul>
<p>Youll also notice something important:</p>
<ul>
<li><p>We <strong>didn't hardcode</strong> values; even in this small project, the bucket name is stored in <code>variables.tf</code></p>
</li>
<li><p>We used <code>depends_on</code> so Terraform knows to create the Public Access Block before applying the bucket policy</p>
</li>
</ul>
<p>These are the exact practices that prevent the mistakes we discussed earlier.</p>
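<p>For reference, the variable pattern looks roughly like this (names here are illustrative; check the repo for the actual definitions):</p>
<pre><code class="lang-bash"># variables.tf: keep environment-specific values out of main.tf
variable "bucket_name" {
  description = "Globally unique name for the S3 website bucket"
  type        = string
}

# main.tf: reference the variable instead of hardcoding the value
resource "aws_s3_bucket" "site_bucket" {
  bucket = var.bucket_name
}
</code></pre>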
<h3 id="heading-run-these-commands-to-see-everything-in-action"><strong>Run These Commands to See Everything in Action</strong></h3>
<p>Open your terminal inside the <code>terra-mistakes</code> directory and run:</p>
<pre><code class="lang-bash">terraform init          <span class="hljs-comment"># Initialize provider plugins</span>
terraform fmt           <span class="hljs-comment"># Fix indentation, syntax, and formatting</span>
terraform validate      <span class="hljs-comment"># Detect any surface-level misconfigurations</span>
terraform plan          <span class="hljs-comment"># Preview what Terraform will provision</span>
terraform apply --auto-approve   <span class="hljs-comment"># Deploy the S3 website</span>
</code></pre>
<p>Once the resources are created, Terraform will output the <strong>Website URL</strong> from the <code>output.tf</code> file.</p>
<p><strong>Note:</strong><br />This project is deployed in <strong>us-east-1</strong>, and the S3 website endpoint URL is hardcoded for that region.<br />If you want to deploy in another region, update both the <strong>provider</strong> and <strong>output.tf</strong> accordingly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033077170/04287995-e618-4b99-99bc-b3d1edea3424.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033091324/6b2a0a62-00d2-442d-80cd-14e5355cf926.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033098858/3e866b86-89ed-44c8-8dab-cd71616d361a.png" alt class="image--center mx-auto" /></p>
<p>Now, open the URL in your browser, and boom! 🎉<br />Your portfolio website is live on S3.</p>
<hr />
<h3 id="heading-updating-the-policy-the-right-way"><strong>Updating the Policy the Right Way</strong></h3>
<p>Let's say you want to add <strong>delete object</strong> permission to the bucket policy.<br />The beginner instinct?<br />Head straight to the AWS Console → IAM → update the policy manually.</p>
<p><strong>But we don't do ClickOps here.</strong></p>
<p>Remember: Terraform is your single source of truth.</p>
<p>So instead, update your HCL code inside the bucket policy:</p>
<pre><code class="lang-bash">Statement = [
  {
    Sid       = <span class="hljs-string">"PublicReadGetObject"</span>
    Effect    = <span class="hljs-string">"Allow"</span>
    Principal = <span class="hljs-string">"*"</span>
    Action    = [<span class="hljs-string">"s3:GetObject"</span>, <span class="hljs-string">"s3:DeleteObject"</span>]  <span class="hljs-comment"># Added delete permission</span>
    Resource  = <span class="hljs-string">"<span class="hljs-variable">${aws_s3_bucket.site_bucket.arn}</span>/*"</span>
  }
]
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033118475/8344ebc0-130c-43a2-8e1d-b74943ca30f9.png" alt class="image--center mx-auto" /></p>
<p>Then apply the changes:</p>
<pre><code class="lang-bash">terraform apply --auto-approve
</code></pre>
<p>Terraform will detect the change, update only the policy, and keep everything consistent: no drift, no confusion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033132834/396bd3b2-c438-47ee-8c83-c0a5edfabe4b.png" alt class="image--center mx-auto" /></p>
<p>To cross-check the changes, go to the S3 Console, navigate to the bucket, open the <strong>Permissions</strong> tab, and review the Bucket Policy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765033178315/b006c4d9-5042-4cb0-b24b-21a523e6aaf2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cleaning-up"><strong>Cleaning Up</strong></h3>
<p>When you're done experimenting, don't forget to delete everything:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p>This removes all AWS resources cleanly so you don't get billed unnecessarily.</p>
<hr />
<h2 id="heading-conclusion"><strong>🌟 Conclusion</strong></h2>
<p>Learning Terraform as a beginner can feel overwhelming, not because the tool is hard, but because Infrastructure as Code requires a different mindset. The mistakes we covered today are extremely common, and almost every Terraform practitioner (including me 😅) has made them at some point.</p>
<p>But the good news?</p>
<p>Once you start writing declarative code, using variables wisely, avoiding ClickOps, understanding dependencies, and making <code>fmt</code>, <code>validate</code>, and <code>plan</code> part of your workflow, Terraform becomes a powerful, predictable, and enjoyable tool to work with.</p>
<p>Take your time, practice often, and break things safely.<br />Every small experiment makes you a better DevOps engineer.</p>
<p>If you followed along with the hands-on demo, you now have a static website deployed on AWS using clean, beginner-friendly Terraform. That's a solid milestone. Great job, Dev! 🚀</p>
<p>Feel free to explore more projects, improve your configurations, or even contribute to the repo. And if you ever get stuck, you know where to find me.</p>
<h3 id="heading-connect-with-me"><strong>Connect With Me</strong></h3>
<p>If you enjoyed this blog or learned something new, let's connect:</p>
<ul>
<li><p>🌐 <strong>Website:</strong> <a target="_blank" href="https://praveshsudha.com">https://praveshsudha.com</a></p>
</li>
<li><p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">https://www.linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>📺 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a></p>
</li>
</ul>
<p>Lets keep building, learning, and automating together! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/dont-touch-terraform-before-avoiding-these-5-rookie-mistakes</link><guid isPermaLink="true">https://blog.praveshsudha.com/dont-touch-terraform-before-avoiding-these-5-rookie-mistakes</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Optimising Terraform Performance: Remote Backends, Parallelism & many more]]></title><description><![CDATA[<h2 id="heading-introduction">🌟 Introduction</h2>
<p>Welcome, Devs, to the exciting world of Infrastructure and Cloud computing!</p>
<p>If you have been navigating the cloud landscape recently, you have undoubtedly encountered the industry buzzword: <strong>Infrastructure as Code (IaC)</strong>. Simply put, IaC is the practice of specifying your infrastructure requirements in a code format rather than manually configuring servers and networks. This approach is the bedrock of modern DevOps, ensuring <strong>reliability</strong> and <strong>consistency</strong> across all your environments.</p>
<p>While there is a robust ecosystem of tools offering IaC capabilities, including Pulumi, Chef, Puppet, and AWS CloudFormation, today we are zeroing in on the undisputed heavyweight of this segment: <strong>Terraform</strong>.</p>
<p>If you are just starting out or want to dive deeper into the core concepts, I have covered a lot of ground regarding Terraform in my previous posts. I highly recommend checking out the full series here to get up to speed:</p>
<p>👉 <a target="_blank" href="https://blog.praveshsudha.com/series/terraform"><strong>Terraform Series</strong></a></p>
<p>In today's guide, we are shifting our focus to <strong>Optimisation</strong>. As your infrastructure grows, so does the complexity and the time required for deployments. We will dive into the best practices to improve the performance of your Terraform configurations, ensuring your infrastructure remains efficient, fast, and secure.</p>
<p>So, without further ado, let's get started!</p>
<hr />
<h2 id="heading-youtube-demonstration">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/JorEMXgqHWk">https://youtu.be/JorEMXgqHWk</a></div>
<p> </p>
<hr />
<h2 id="heading-pre-requisites">🌟 Pre-Requisites</h2>
<p>Before we jump into the optimisation techniques, let's ensure you have the necessary tools ready to roll.</p>
<p>As usual, for this guide, you will need the following set up on your machine:</p>
<ul>
<li><p><strong>AWS CLI</strong> (Configured with an appropriate IAM user)</p>
</li>
<li><p><strong>Terraform CLI</strong></p>
</li>
<li><p><strong>Docker and Docker-compose</strong></p>
</li>
</ul>
<p>If you don't have these installed yet or aren't sure how to configure them, don't worry! I have walked through the entire process step-by-step in my previous guide. Just follow the instructions there, and you will be good to go:</p>
<p>👉 <a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli"><strong>How to Install AWS CLI and Terraform</strong></a></p>
<p>Once you are all set up, proceed to the next section where we start optimizing!</p>
<hr />
<h2 id="heading-how-to-optimise-terraform">🌟 How to Optimise Terraform</h2>
<p>Now that we are set up, let's get into the meat of the matter. Here are <strong>5 Best Practices</strong> to turbocharge your Terraform performance.</p>
<h3 id="heading-1-use-remote-backends-for-state-management">1. Use Remote Backends for State Management</h3>
<p>By default, Terraform stores its "state" (the file that maps your code to real-world resources) locally on your machine (<code>terraform.tfstate</code>). While this works for solo side projects, it kills performance and collaboration in real teams.</p>
<p><strong>The Fix:</strong> Store your Terraform State file in a remote location, preferably <strong>AWS S3</strong>.</p>
<ul>
<li><p><strong>Why?</strong> It prevents conflicts when multiple team members (Dev, Stage, Prod) try to apply changes at the same time.</p>
</li>
<li><p><strong>The Performance Boost:</strong> Moving to a remote backend can improve I/O performance by <strong>10-30%</strong> for large state files.</p>
</li>
<li><p><strong>State Locking:</strong> When configured correctly with State Lock enabled, you ensure that no two people can write to the state simultaneously. (Note: While traditionally done with DynamoDB, ensuring your backend supports locking is key to preventing corruption).</p>
</li>
</ul>
<p><strong>How to do it:</strong></p>
<p>You simply need to create an S3 bucket and reference it in your configuration:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket       = <span class="hljs-string">"my-terraform-state-bucket"</span>
    key          = <span class="hljs-string">"prod/terraform.tfstate"</span>
    region       = <span class="hljs-string">"us-east-1"</span>
    use_lockfile = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>It's that simple!</p>
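<p>One caveat: <code>use_lockfile</code> (S3-native locking) requires a recent Terraform release (1.10+). On older versions, the traditional approach is a DynamoDB table for locking, roughly like this:</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # table needs a "LockID" string hash key
  }
}
</code></pre>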
<h3 id="heading-2-modularise-your-code">2. Modularise Your Code</h3>
<p>Writing one giant <code>main.tf</code> file is a common rookie mistake. If you are defining a secure EC2 instance, you likely also need a VPC, Security Groups, Subnets, etc. Lumping this all together makes Terraform work harder to calculate dependencies.</p>
<p><strong>The Fix:</strong> Break your code down into <strong>Modules</strong>.</p>
<p>Instead of rewriting the same sub-components for every service, use the <strong>DRY (Don't Repeat Yourself)</strong> principle. Create a module for your network stack, a module for your compute stack, etc.</p>
<ul>
<li><p><strong>Benefit:</strong> This reduces complex dependency graphs.</p>
</li>
<li><p><strong>Speed:</strong> It enables better parallel execution because Terraform can process distinct modules concurrently.</p>
</li>
</ul>
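<p>In practice, a modularised root configuration looks something like this (module paths and variable names are illustrative):</p>
<pre><code class="lang-bash">module "network" {
  source   = "./modules/network"
  vpc_cidr = "10.0.0.0/16"
}

module "compute" {
  source    = "./modules/compute"
  # Wiring an output of one module into another creates the dependency edge;
  # unrelated modules can still be processed concurrently
  subnet_id = module.network.public_subnet_id
}
</code></pre>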
<h3 id="heading-3-unlock-true-parallelism">3. Unlock True Parallelism</h3>
<p>Terraform is capable of walking through the dependency graph and creating non-dependent resources at the same time. However, the default setting is often too conservative.</p>
<p><strong>The Fix:</strong> Adjust the <code>-parallelism</code> flag.</p>
<p>By default, Terraform limits concurrent operations to <strong>10</strong>. For modern systems, this is quite low.</p>
<ul>
<li><p><strong>Guideline:</strong> You can safely increase this to <strong>30-100</strong> for normal development changes.</p>
</li>
<li><p><strong>Constraint:</strong> Don't go <em>too</em> high, or you might hit AWS API rate limits (throttling). Aim for roughly 512MB of system memory per 1,000 resources.</p>
</li>
</ul>
<p><strong>The Result:</strong></p>
<p>Using this flag can cut down plan and apply times significantly, potentially dropping a 3-5 minute operation down to just 30-60 seconds.</p>
<pre><code class="lang-bash">terraform apply -parallelism=30
</code></pre>
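<p>If you don't want to remember the flag every time, Terraform also reads per-command defaults from <code>TF_CLI_ARGS_&lt;command&gt;</code> environment variables:</p>
<pre><code class="lang-bash"># Applied automatically to every matching command in this shell session
export TF_CLI_ARGS_plan="-parallelism=30"
export TF_CLI_ARGS_apply="-parallelism=30"
</code></pre>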
<h3 id="heading-4-optimise-provider-configuration">4. Optimise Provider Configuration</h3>
<p>Sometimes, Terraform performs checks that aren't strictly necessary for every single run, especially in non-production environments.</p>
<p><strong>The Fix:</strong> Tune your AWS Provider.</p>
<p>In your Testing, Staging, or Dev environments, you can tell the provider to skip certain validations to save time on API calls. You can also configure retry behaviors to handle network blips gracefully without failing the whole run.</p>
<p><strong>Example Configuration:</strong></p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region                      = <span class="hljs-string">"us-east-1"</span>
  skip_credentials_validation = <span class="hljs-literal">true</span>
  skip_metadata_api_check     = <span class="hljs-literal">true</span>
  max_retries                 = 5
}
</code></pre>
<blockquote>
<p><strong>Note:</strong> While this is a great time-saver for Dev/Test environments, check your organization's policy before using strict validation skipping in Production.</p>
</blockquote>
<h3 id="heading-5-use-resource-targeting">5. Use Resource Targeting</h3>
<p>Imagine you have a massive Terraform setup managing an S3 bucket, its Website Config, ACL rules, and specific objects (like <code>index.html</code>). If you just want to rename the bucket, Terraform might try to refresh the state of <em>every single component</em> related to it.</p>
<p><strong>The Fix:</strong> Use the <code>-target</code> flag.</p>
<p>This allows you to apply changes <em>only</em> to specific resources, ignoring the rest of the infrastructure. It acts like a surgical strike rather than a carpet bombing.</p>
<ul>
<li><p><strong>The Impact:</strong> This limits the operation's scope and prevents the recreation of dependent components that haven't changed.</p>
</li>
<li><p><strong>Efficiency:</strong> For targeted fixes, this can reduce execution time by <strong>85-95%</strong>.</p>
</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-comment"># Only apply changes to the specific bucket, ignoring the rest</span>
terraform apply -target=aws_s3_bucket.my_website_bucket
</code></pre>
<hr />
<h2 id="heading-practical-demonstration">🌟 Practical Demonstration</h2>
<p>Enough with the theory! Let's get our hands dirty with some real Terraform code and the AWS Console.</p>
<p>To demonstrate these optimizations, we are going to deploy a <strong>Request Counter Application</strong> on <strong>AWS Elastic Beanstalk</strong>. This isn't just a "Hello World"; it is a multi-container setup running on an ECS Cluster involving:</p>
<ol>
<li><p><strong>Nginx:</strong> As a reverse proxy and load balancer.</p>
</li>
<li><p><strong>Node.js:</strong> The application server (running two instances).</p>
</li>
<li><p><strong>Redis:</strong> To store the request count data persistently.</p>
</li>
</ol>
<h3 id="heading-step-1-create-your-ecr-repositories">Step 1: Create Your ECR Repositories</h3>
<p>First, we need a place to store our Docker images. We will create two Public Repositories in AWS Elastic Container Registry (ECR): one for our web app and one for our custom Nginx image.</p>
<p>Run the following commands in your terminal:</p>
<pre><code class="lang-bash">aws ecr-public create-repository --repository-name nginx-node-redis-web
aws ecr-public create-repository --repository-name nginx-node-redis-nginx
</code></pre>
<h3 id="heading-step-2-get-the-code">Step 2: Get the Code</h3>
<p>The complete source code for this project is available on my GitHub.</p>
<ol>
<li><p><strong>Fork</strong> the repo.</p>
</li>
<li><p><strong>Clone</strong> it to your local system.</p>
</li>
<li><p>Open it in your code editor (like VS Code).</p>
</li>
</ol>
<p>👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/nginx-node-redis.git"><strong>GitHub Repo: Nginx-Node-Redis Project</strong></a></p>
<h3 id="heading-step-3-test-locally-get-the-vibe">Step 3: Test Locally (Get the Vibe)</h3>
<p>Before we deploy to the cloud, let's make sure it works on your machine. Inside the project directory, run:</p>
<pre><code class="lang-bash">docker-compose up --build
</code></pre>
<p>This will spin up the Nginx, Redis, and two Web containers. Open your browser and go to <code>http://localhost:8080</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346766913/4340e566-05b3-40e1-9b88-bc4998d56e0f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346774005/d9c6949d-9215-41d6-8216-a47dde7369e8.png" alt class="image--center mx-auto" /></p>
<p>You should see a nice Request Counter app. Every time you refresh, the counter increments, and you will see the server ID toggle between <code>web1</code> and <code>web2</code> as Nginx load-balances the traffic.</p>
<p><em>Once you are satisfied, press</em> <code>CTRL+C</code> in your terminal to stop the application.</p>
<h3 id="heading-step-4-build-and-push-to-cloud">Step 4: Build and Push to Cloud</h3>
<p>Now, let's move these images to AWS.</p>
<ol>
<li><p>Navigate to the <code>web</code> directory in your terminal.</p>
</li>
<li><p>Go to <strong>AWS Console → ECR → Public Repositories</strong>.</p>
</li>
<li><p>Select <code>nginx-node-redis-web</code> and click the <strong>"View Push Commands"</strong> button.</p>
</li>
<li><p>Execute those commands one by one to build and push your web image.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346634851/a092f62f-3dbe-4085-8a63-1b6f3be5db6d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346651562/8e8a5082-918e-4373-9395-ec0e76823866.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346614119/b7079d99-104d-485f-947a-93255fb1c831.png" alt class="image--center mx-auto" /></p>
<p><strong>Repeat for Nginx:</strong></p>
<ol>
<li><p>Navigate to the <code>nginx</code> directory.</p>
</li>
<li><p>In the AWS Console, select <code>nginx-node-redis-nginx</code>.</p>
</li>
<li><p>Click <strong>"View Push Commands"</strong> and execute them to push your Nginx image.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346605794/abc0d71f-16d5-44bb-b137-21f8e33e17dd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-the-terraform-configuration">Step 5: The Terraform Configuration</h3>
<p>Navigate to the <code>terra-performance-config</code> directory. This is where we have applied all the optimizations we discussed earlier:</p>
<ul>
<li><p><code>provider.tf</code>: We are skipping unnecessary validation checks (<code>skip_metadata_api_check</code>, etc.) to speed up the provider.</p>
</li>
<li><p><code>backend.tf</code>: We are using an <strong>S3 bucket</strong> to host our state file remotely and enabling <code>use_lockfile</code> to prevent write conflicts.</p>
</li>
<li><p><code>main.tf</code>: We are using a Terraform Module for Elastic Beanstalk. It utilizes a <code>Dockerrun.aws.json</code> file (similar to <code>docker-compose</code> but for ECS) which is uploaded to S3 and referenced in the module.</p>
</li>
<li><p><code>Dockerrun.aws.json</code>: Skeleton of your Docker containers. <strong>Make sure to replace the image URIs of web1, web2, and Nginx; you can find them in your public ECR repo.</strong></p>
</li>
<li><p><code>variables.tf</code>: Contains our standard configuration using the default VPC and subnets in <code>us-east-1</code>.</p>
</li>
</ul>
<blockquote>
<p><strong>CRITICAL: Architecture Check (ARM vs AMD)</strong></p>
<p>I built the default Docker images on a <strong>MacBook Air M3 (ARM64 architecture)</strong>.</p>
<p>If you are using <strong>Windows or Linux (Intel/AMD)</strong>, your architecture is likely <strong>AMD64</strong>. You <strong>must</strong> make these two small changes in the code before proceeding, or the deployment will fail:</p>
<ol>
<li><p><strong>In</strong> <code>variables.tf</code>: Change the instance type from <code>t4g.micro</code> (ARM-based) to <code>t3.micro</code> or <code>t3.small</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346533637/39d9d13c-c414-4ba0-a27e-c55d7e6413c6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>In</strong> <code>Dockerrun.aws.json</code>: Locate the Redis container definition. Change the image from the specific ARM tag to generic: <code>"image": "redis:alpine"</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346541475/c14d78f8-f8f6-48a3-8fb9-0c36c5a0074c.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
</blockquote>
<h3 id="heading-step-6-testing-parallelism">Step 6: Testing Parallelism</h3>
<p>Now for the moment of truth. Let's see how fast we can provision this stack using the <strong>Parallelism</strong> optimization.</p>
<p>Run the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-performance-config

<span class="hljs-comment"># Create the S3 bucket for our artifacts</span>
aws s3 mb s3://pravesh-ebs-terra-performance-bucket

<span class="hljs-comment"># Initialize Terraform</span>
terraform init

<span class="hljs-comment"># Apply with high parallelism</span>
time terraform apply --auto-approve -parallelism=30
</code></pre>
<p><strong>The Result:</strong> Usually, a multi-container Elastic Beanstalk environment takes <strong>10-30 minutes</strong> to provision. With <code>-parallelism=30</code>, you will likely see this drop to <strong>3-8 minutes</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346730508/05aeaa30-789e-454a-bfe4-6cf23899716d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346736686/02da2095-7f85-4f29-8748-abbc270cbd1f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346744644/37867d2d-f794-40e4-b290-8ba39843a9d1.png" alt class="image--center mx-auto" /></p>
<p>Once finished, Terraform will output the Elastic Beanstalk URL. Click it to see your live app!</p>
<h3 id="heading-step-7-testing-resource-targeting">Step 7: Testing Resource Targeting</h3>
<p>Finally, let's look at the power of <strong>Resource Targeting</strong>.</p>
<p>We have made a small change to the <code>Dockerrun.aws.json</code> file (e.g., adding an Environment Variable).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346677686/092664a9-e68a-471f-9fe8-24d6d4c4c147.png" alt class="image--center mx-auto" /></p>
<p>If we ran a normal apply, Terraform might check every single resource. Instead, we will target <em>only</em> the specific components we are updating:</p>
<pre><code class="lang-bash">time terraform apply -parallelism=30 \
  -target=aws_s3_object.dockerrun \
  -target=aws_elastic_beanstalk_application_version.v1 \
  -target=module.elastic-beanstalk-environment \
  -auto-approve
</code></pre>
<p><strong>The Result:</strong> Without targeting, Terraform might attempt to refresh the state of the entire VPC and Security Group structure. With targeting, this update happens in <strong>under a minute</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764346684092/af9bd335-164b-4c07-87c6-515fc3e85391.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-cleanup">Cleanup</h3>
<p>Don't forget to tear down your infrastructure to avoid unexpected AWS bills!</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Destroy Terraform resources</span>
terraform destroy --auto-approve

<span class="hljs-comment"># Clean up S3</span>
aws s3 rm s3://pravesh-ebs-terra-performance-bucket --recursive
aws s3 rb s3://pravesh-ebs-terra-performance-bucket

<span class="hljs-comment"># Delete ECR Repositories</span>
aws ecr-public delete-repository --repository-name nginx-node-redis-web --force
aws ecr-public delete-repository --repository-name nginx-node-redis-nginx --force
</code></pre>
<hr />
<h2 id="heading-conclusion">🌟 Conclusion</h2>
<p>And thats a wrap, folks!</p>
<p>We have successfully journeyed through the landscape of <strong>Terraform Optimization</strong>. We didn't just learn how to write Infrastructure as Code; we learned how to make it <strong>efficient</strong>, <strong>scalable</strong>, and <strong>blazing fast</strong>.</p>
<p>From moving our state to a <strong>Remote Backend</strong> to unlocking the speed of <strong>Parallelism</strong>, and from <strong>Modularising</strong> our logic to performing surgical strikes with <strong>Resource Targeting</strong>: you now possess the toolkit to take your DevOps game to the next level.</p>
<p>Remember, optimization isn't just about saving a few minutes here and there. It's about creating a developer experience where feedback loops are short, deployments are reliable, and your infrastructure can scale without becoming a bottleneck.</p>
<p>I hope you enjoyed this deep dive and the hands-on project. Go ahead and apply these techniques to your own infrastructure, and let me know how much time you saved on your next deployment!</p>
<p>If you found this guide helpful or have any questions, feel free to connect with me. I talk about Cloud, DevOps, and everything in between.</p>
<p><strong>🚀 Let's Connect:</strong></p>
<p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://linkedin.com/in/pravesh-sudha">linkedin.com/in/pravesh-sudha</a></p>
<p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">x.com/praveshstwt</a></p>
<p>📹 <strong>YouTube:</strong> <a target="_blank" href="https://youtube.com/@pravesh-sudha">youtube.com/@pravesh-sudha</a></p>
<p>🌐 <strong>Website:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a></p>
<p>Happy Coding and Happy Clouding! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/optimising-terraform-performance-remote-backends-parallelism-and-many-more</link><guid isPermaLink="true">https://blog.praveshsudha.com/optimising-terraform-performance-remote-backends-parallelism-and-many-more</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Elastic Beanstalk]]></category><category><![CDATA[Elastic Container Service]]></category><category><![CDATA[ECS]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Terraform Workspaces and Multi-Environment Deployments]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of <strong>Cloud and Automation</strong>, Devs!<br />Today, we're going to explore one of the most powerful and widely used Infrastructure-as-Code (IaC) tools: <strong>Terraform</strong>. In this guide, we'll learn how to use Terraform <strong>workspaces</strong> to manage multiple environments seamlessly, from <strong>Development</strong> to <strong>Staging</strong> and finally <strong>Production</strong>.</p>
<p>By the end of this blog, you'll not only understand how Terraform organizes infrastructure across environments but also see it in action through a <strong>hands-on demonstration</strong>: deploying a static website on <strong>Amazon S3</strong> with three isolated environments.</p>
<p>So, without further ado, let's dive in and uncover how Terraform simplifies multi-environment deployments in the cloud.</p>
<hr />
<h2 id="heading-youtube-demonstration">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/W0x42D34OMw">https://youtu.be/W0x42D34OMw</a></div>
<p> </p>
<hr />
<h2 id="heading-prerequisites">💡 Prerequisites</h2>
<p>Before we jump into the implementation, let's make sure your setup is ready. You'll need the following tools installed and configured on your system:</p>
<ul>
<li><p><strong>AWS CLI</strong>: installed and configured with an IAM user that has full access to <strong>Amazon S3</strong>. If you don't know how to do that, follow <a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli">steps 1-3 from my Terraform starter blog</a> (just change the permission from EC2 to S3FullAccess).</p>
</li>
<li><p><strong>Terraform CLI</strong>: installed and ready to execute Terraform commands.</p>
</li>
</ul>
<p>Once these are in place, you'll have everything you need to follow along with the tutorial and deploy your multi-environment infrastructure.</p>
<hr />
<h2 id="heading-what-is-terraform-workspace">💡 What is Terraform Workspace?</h2>
<p>When managing infrastructure with <strong>Terraform</strong>, it's common to work with multiple environments such as <strong>Development</strong>, <strong>Staging</strong>, and <strong>Production</strong>. Each of these environments often requires its own set of resources and configurations. To keep things organized and maintain a clean infrastructure codebase, Terraform provides a powerful feature called <strong>Workspaces</strong>.</p>
<p>A <strong>Terraform Workspace</strong> allows you to create and manage <strong>separate environments within a single Terraform configuration</strong>. Each workspace is associated with its own <strong>state file</strong>, which means the resources for one environment are isolated from another, even though they share the same configuration files. This makes it much easier to manage multiple deployments, all from a single codebase.</p>
<p>When you initialize Terraform for the first time, a default workspace named <strong>default</strong> is automatically created. Any infrastructure you create without explicitly switching workspaces will live in this default environment. You can then create new workspaces for different environments as needed.</p>
<p>Here are the available Terraform workspace commands:</p>
<pre><code class="lang-bash">terraform workspace --<span class="hljs-built_in">help</span>

Usage: terraform [global options] workspace
  new, list, show, select and delete Terraform workspaces.

Subcommands:
    delete    Delete a workspace
    list      List workspaces
    new       Create a new workspace
    select    Select a workspace
    show      Show the name of the current workspace
</code></pre>
<p>Each workspace must have a <strong>unique name</strong>. When you switch between them, Terraform automatically switches to the <strong>state file</strong> associated with the selected workspace, ensuring your deployments remain consistent and isolated per environment.</p>
<hr />
<h2 id="heading-pros-and-cons-of-using-terraform-workspaces">💡 Pros and Cons of Using Terraform Workspaces</h2>
<p>Before deciding to use Terraform Workspaces for managing your environments, it's important to understand both their advantages and limitations. While they provide a convenient way to organize infrastructure, they may not always be the best fit for every use case, especially in large-scale or complex deployments.</p>
<h4 id="heading-pros">Pros</h4>
<ul>
<li><p><strong>Single Configuration for Multiple Environments</strong><br />You can manage multiple environments, like <strong>Dev</strong>, <strong>Stage</strong>, and <strong>Prod</strong>, using a single Terraform configuration. This reduces code duplication and keeps your setup clean and maintainable.</p>
</li>
<li><p><strong>Easy Environment Switching</strong><br />  Workspaces come with a simple built-in command to switch between environments:</p>
<pre><code class="lang-bash">  terraform workspace select &lt;name&gt;
</code></pre>
<p>  This makes moving between environments effortless and consistent.</p>
</li>
<li><p><strong>Simplifies Non-Production Environment Creation</strong><br />You can easily spin up <strong>non-production environments</strong> such as <strong>Development</strong>, <strong>QA</strong>, <strong>Beta</strong>, or <strong>UAT</strong> that mirror your production setup, often as smaller, scaled-down versions.</p>
</li>
<li><p><strong>Resource and Variable Isolation</strong><br />  Each workspace maintains its own <strong>state file</strong> and can have environment-specific <strong>variables</strong>, reducing the chance of misconfigurations or accidental resource overlap.</p>
</li>
<li><p><strong>Ideal for Small to Medium Projects</strong><br />  For small teams or projects where environments share similar configurations, workspaces provide just the right balance of simplicity and control.</p>
</li>
</ul>
<h4 id="heading-cons">Cons</h4>
<ul>
<li><p><strong>Added Complexity for Large-Scale Infrastructure</strong><br />  As projects grow, managing multiple environments and configurations through workspaces can become cumbersome and harder to maintain.</p>
</li>
<li><p><strong>Not Fully Isolated</strong><br />  While each workspace has its own state file, they still share the same backend configuration. Without proper management, this can lead to <strong>state conflicts or accidental cross-environment changes</strong>.</p>
</li>
<li><p><strong>Limited for Advanced Use Cases</strong><br />Workspaces aren't ideal for <strong>multi-provider</strong> setups or situations where resources need to be shared across environments. In those cases, using <strong>separate directories or repositories</strong> for each environment is often a better approach.</p>
</li>
</ul>
<hr />
<h2 id="heading-practical-demonstration">💡 Practical Demonstration</h2>
<p>Now that we've covered the theory, it's time to get our hands dirty with a real-world example.<br />The project we'll be working on demonstrates how to use <strong>Terraform Workspaces</strong> to deploy a <strong>static website</strong> on <strong>AWS S3</strong> for multiple environments: Dev, Stage, and Prod.</p>
<p>You can find the complete code for this project on my GitHub repository:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects/">GitHub: terraform-workspace-s3</a></p>
<p>Navigate to the <code>terraform-workspace-s3</code> directory, and you'll see the following files and folders:</p>
<ul>
<li><p><code>provider.tf</code>: defines AWS as the cloud provider and specifies the region (<code>us-east-1</code>).</p>
</li>
<li><p><code>output.tf</code>: prints the website URL once deployment is complete. The output changes based on the environment.</p>
</li>
<li><p><code>main.tf</code>: the core of the project. It:</p>
<ul>
<li><p>Creates an S3 bucket named <code>pravesh-{env}-terraform-workspace-site</code>.</p>
</li>
<li><p>Configures it as a <strong>static website</strong>.</p>
</li>
<li><p>Uploads two HTML objects: <code>index.html</code> and <code>error.html</code>.</p>
</li>
<li><p>Includes configurations for <strong>ownership controls</strong>, <strong>public access</strong>, and <strong>bucket policies</strong>.</p>
</li>
</ul>
</li>
<li><p><code>index/</code>: contains three subdirectories: <code>dev</code>, <code>stage</code>, and <code>prod</code>.<br />Each folder has its own <code>index.html</code> file with environment-specific content.</p>
</li>
<li><p><code>create.sh</code>: a shell script that automates workspace creation and resource deployment.</p>
</li>
<li><p><code>delete.sh</code>: a cleanup script that destroys all created resources to prevent unnecessary AWS charges.</p>
</li>
</ul>
<p>Inside the <code>main.tf</code>, we've set <code>locals.env</code> equal to the current workspace name. This dynamic link ensures Terraform automatically detects the environment (Dev, Stage, or Prod) and applies the corresponding configuration.</p>
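<p>As a sketch of that wiring (illustrative only; refer to the repository for the full resource definitions), the key HCL pattern looks roughly like this:</p>
<pre><code class="lang-hcl"># locals.env follows whichever workspace is currently selected
locals {
  env = terraform.workspace # "dev", "stage", or "prod"
}

# The bucket name embeds the environment, giving each workspace its own bucket
resource "aws_s3_bucket" "site" {
  bucket = "pravesh-${local.env}-terraform-workspace-site"
}
</code></pre>
<p>Because <code>terraform.workspace</code> resolves when Terraform plans, switching workspaces is enough to retarget every interpolated name; no variables need to be passed manually.</p>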
<p>Once you've cloned the repository locally, open your terminal and run the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-projects/terraform-workspace-s3
chmod u+x create.sh
./create.sh
</code></pre>
<p>This script will:</p>
<ol>
<li><p>Create all the required Terraform workspaces.</p>
</li>
<li><p>Apply the configuration and deploy the static website for each environment.</p>
</li>
</ol>
<p>If you prefer, you can also run the Terraform commands manually instead of using the script.</p>
<p><em>(I ran it manually while testing; here are some screenshots.)</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414872519/4aa647cd-a8b6-407c-a90c-c8f371ba255f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414879155/db277c85-8174-4d3b-8937-4808d0bfb597.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414889142/d6f43104-1c18-40a3-ba9d-c6c2ca610263.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414893531/39bf8c3d-2d6f-4179-ae6e-7bfd630a7e87.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415060585/77298bd3-5939-40cc-b169-bc73e687995d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414897639/d7d426bd-0f89-49bc-85ec-ca6997ba01d9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762414901534/c0105579-212c-4837-bd3c-0347d1e5d3e0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415068134/c125a1f5-bf5b-4921-be34-c346f4c04262.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415073694/7e1559e8-ce83-4dd8-a896-eb099b85c328.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415077883/2b4d6919-bacb-4212-b08b-cb766a5d5a12.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415082597/22923b56-7dd0-437e-83ba-2dc22abbd0b8.png" alt class="image--center mx-auto" /></p>
<p>After the deployment completes successfully, Terraform will output a <strong>website URL</strong> for each environment.<br />Open the URL in your browser, and you'll see your static website live, hosted directly from your S3 bucket!</p>
<p>When you're done testing, make sure to delete the resources to avoid unwanted charges.<br />You can do this easily by running the cleanup script:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-projects/terraform-workspace-s3
chmod u+x delete.sh
./delete.sh
</code></pre>
<p>This will remove all S3 buckets and their objects, and delete the Terraform workspaces you created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762415169439/1ee4ace7-7b86-46ac-bc2c-9dc014922b9f.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-conclusion">🧩 <strong>Conclusion</strong></h2>
<p>As we wrap up this project, we've seen how <strong>Terraform Workspaces</strong> simplify managing multiple environments like <strong>Dev</strong>, <strong>Stage</strong>, and <strong>Prod</strong>, all from a single configuration.<br />Instead of maintaining separate folders or repos, workspaces help isolate state files while keeping the infrastructure code consistent and scalable.</p>
<p>We also explored how to host a static website on <strong>AWS S3</strong> using Terraform, and learned how to configure ownership controls and access policies and automate deployment across environments.</p>
<p>While Workspaces are great for small to medium-sized projects, they do have some limitations for larger infrastructure setups. Still, they're a great way to get started with multi-environment IaC and understand how Terraform handles isolation through state management.</p>
<p>If you've followed along till the end, congratulations: you've just taken a big step toward mastering <strong>Infrastructure as Code</strong> with Terraform!</p>
<h3 id="heading-connect-with-me"><strong>🌐 Connect with Me</strong></h3>
<p>If you enjoyed this guide, share it with your DevOps buddies and stay tuned for more such projects!<br />You can also find me sharing tech content, tutorials, and behind-the-scenes DevOps experiments here 👇</p>
<ul>
<li><p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha"><strong>linkedin.com/in/pravesh-sudha</strong></a></p>
</li>
<li><p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt"><strong>x.com/praveshstwt</strong></a></p>
</li>
<li><p>📹 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha"><strong>youtube.com/@pravesh-sudha</strong></a></p>
</li>
<li><p>🌐 <strong>Website:</strong> <a target="_blank" href="https://blog.praveshsudha.com/"><strong>blog.praveshsudha.com</strong></a></p>
</li>
</ul>
]]></description><link>https://blog.praveshsudha.com/terraform-workspaces-and-multi-environment-deployments</link><guid isPermaLink="true">https://blog.praveshsudha.com/terraform-workspaces-and-multi-environment-deployments</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[coding]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[Terraform Meets Ansible: Automating Multi-Environment Infrastructure on AWS]]></title><description><![CDATA[<h2 id="heading-introduction">🚀 Introduction</h2>
<p>Welcome Devs, to the world of <strong>Cloud and Code</strong> 💻</p>
<p>Today, I've got something really exciting for you. We're going to integrate <strong>Terraform</strong> with <strong>Ansible</strong> to showcase the true power of <strong>Infrastructure as Code (IaC)</strong> and <strong>Configuration Management</strong>, all <strong>fully automated</strong> with a <strong>multi-environment setup</strong>.</p>
<p>This setup will give you a real-world glimpse of how modern DevOps projects operate with environments like <strong>Dev</strong>, <strong>Stage</strong>, and <strong>Prod</strong>, and how tools like <strong>Terraform</strong> and <strong>Ansible</strong> work together: one handling infrastructure provisioning, the other managing configuration.</p>
<p>So without further ado, let's dive in and start building 🚀</p>
<hr />
<h2 id="heading-youtube-demonstration">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tKlGTGye_hk">https://youtu.be/tKlGTGye_hk</a></div>
<p> </p>
<hr />
<h2 id="heading-pre-requisites">Pre-Requisites</h2>
<p>Before we jump into the setup, make sure you have the following requirements ready on your system 👇</p>
<ul>
<li><p><strong>AWS CLI</strong> installed and configured with an <strong>IAM user</strong> that has full access to <strong>EC2</strong> and <strong>VPC</strong>.</p>
<blockquote>
<p>🧭 If you're new to this part, don't worry! I've already covered it in one of my earlier blogs:<br />👉 <a target="_blank" href="https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli">Getting Started with Terraform: A Beginner's Guide</a><br />(Follow <strong>Steps 1 to 3</strong>; it'll help you install the AWS CLI, configure your IAM user, and set up the Terraform CLI as well.)</p>
</blockquote>
</li>
<li><p><strong>Python</strong> and <strong>Ansible</strong> installed on your system.</p>
<blockquote>
<p>📘 You can check out Ansibles official installation guide here:<br />👉 <a target="_blank" href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#pip-install">Install Ansible with pip</a></p>
</blockquote>
</li>
</ul>
<p>Once that's all set up, we're good to go and can start building our project 🚀</p>
<hr />
<h2 id="heading-getting-started">🚀 Getting Started</h2>
<p>Alright Devs, now that we've set up the basics, let's dive into the real deal.</p>
<p>This project is all about provisioning infrastructure for a <strong>real-world multi-environment setup</strong> and then <strong>configuring it using Ansible</strong>  just like its done in production systems.</p>
<p>You can find the complete project code here:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects.git">Terra-Projects Repository</a><br />The directory for this particular setup is <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects/tree/main/terra-ansible-starter"><code>terra-ansible-starter</code></a>.</p>
<p>Now, let's break down how the project actually works 👇</p>
<h3 id="heading-1-terra-config-directory">🧱 1. <code>terra-config</code> Directory</h3>
<p>This directory contains the following files:</p>
<ul>
<li><p><code>provider.tf</code></p>
</li>
<li><p><code>variable.tf</code></p>
</li>
<li><p><code>output.tf</code></p>
</li>
<li><p><code>main.tf</code></p>
</li>
</ul>
<p>Here's what happens inside:<br />We're creating a <strong>key pair</strong> named <code>tester-key</code>, and a <strong>security group</strong> with three rules, allowing:</p>
<ul>
<li><p>Inbound traffic on ports <strong>80</strong> (HTTP) and <strong>22</strong> (SSH) from anywhere</p>
</li>
<li><p>All outbound traffic</p>
</li>
</ul>
<p>Then, using a <strong>for loop</strong>, we spin up <strong>six EC2 instances</strong>: two for each environment:</p>
<ul>
<li><p><code>dev</code></p>
</li>
<li><p><code>stage</code></p>
</li>
<li><p><code>prod</code></p>
</li>
</ul>
<p>The <code>output.tf</code> file gives us the <strong>public IPs</strong> of all these instances, neatly categorized per environment.</p>
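<p>To make the mechanics concrete, here is a rough HCL sketch of the per-environment instance creation and the grouped output (the variable and resource names are illustrative assumptions, not copied from the repository):</p>
<pre><code class="lang-hcl">locals {
  envs = ["dev", "stage", "prod"]
  # two instances per environment -> six keys like "dev-1", "prod-2"
  instances = { for pair in setproduct(local.envs, ["1", "2"]) :
    "${pair[0]}-${pair[1]}" => pair[0] }
}

resource "aws_instance" "web" {
  for_each               = local.instances
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  key_name               = "tester-key"
  vpc_security_group_ids = [aws_security_group.web.id]
  tags                   = { Name = each.key, Env = each.value }
}

# Public IPs grouped per environment, ready for the inventory script
output "public_ips" {
  value = { for env in local.envs :
    env => [for key, inst in aws_instance.web : inst.public_ip
            if local.instances[key] == env] }
}
</code></pre>
<p>Using <code>for_each</code> rather than <code>count</code> gives each instance a stable, named address in state, so adding or removing one environment never forces the others to be recreated.</p>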
<h3 id="heading-2-scripts-directory">🧮 2. <code>scripts</code> Directory</h3>
<p>Inside this folder, we have a Python script named <code>generate_inv.py</code>.<br />Here's what it does:</p>
<ul>
<li><p>It reads the <code>output.json</code> file (generated by the <code>terraform output</code> command).</p>
</li>
<li><p>Then, it dynamically creates a <code>hosts.ini</code> file inside the <strong>Ansible inventory</strong> directory.</p>
</li>
</ul>
<p>This makes the integration between Terraform and Ansible <strong>completely automated</strong>: no manual IP editing required!</p>
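<p>A minimal sketch of such an inventory generator (the exact output names and SSH settings below are assumptions; the real <code>generate_inv.py</code> in the repository is the source of truth):</p>

```python
def build_inventory(outputs: dict) -> str:
    """Turn `terraform output -json`-style data into an Ansible INI inventory.

    Assumes a single Terraform output (here called "public_ips") whose value
    maps each environment name to a list of public IPs. In the real script,
    `outputs` would come from: json.load(open("output.json")).
    """
    lines = []
    for env, ips in outputs["public_ips"]["value"].items():
        lines.append(f"[{env}]")   # one INI group per environment
        lines.extend(ips)          # one host line per public IP
        lines.append("")           # blank line between groups
    # Shared SSH settings appended for every host (values are assumptions)
    lines += ["[all:vars]",
              "ansible_user=ubuntu",
              "ansible_ssh_private_key_file=~/.ssh/appKey"]
    return "\n".join(lines)

# Fake Terraform output for illustration
sample = {"public_ips": {"value": {"dev": ["3.90.1.10", "3.90.1.11"],
                                   "stage": ["3.90.2.10"]}}}
print(build_inventory(sample))
```

<p>Writing the result to <code>ansible/inventory/hosts.ini</code> is then a one-line file write.</p>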
<h3 id="heading-3-ansible-directory">3. <code>ansible</code> Directory</h3>
<p>Here lies our <strong>configuration magic</strong>.</p>
<ul>
<li><p>Inside this folder, there's a <code>playbook.yml</code> file which defines a <strong>role</strong> called <code>webserver</code>.</p>
</li>
<li><p>The <code>roles/webserver/tasks/main.yml</code> file includes all configuration steps for the servers:</p>
<ul>
<li><p>Install <strong>NGINX</strong></p>
</li>
<li><p>Copy the <code>index-{env}.html</code> file (specific to each environment)</p>
</li>
<li><p>Restart the NGINX server</p>
</li>
</ul>
</li>
</ul>
<p>There's also a <code>files</code> directory that contains separate <code>index.html</code> files for each environment (<code>dev</code>, <code>stage</code>, <code>prod</code>).</p>
<p>The <strong>inventory</strong> folder holds the <code>hosts.ini</code> file, which specifies the public IPs of the instances, and yes, it's dynamically created using our Python script.</p>
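<p>For readers who haven't written an Ansible role before, the three configuration steps listed above translate into a <code>tasks/main.yml</code> roughly like the following (module arguments and the <code>env</code> variable name are assumptions; check the repository for the actual file):</p>
<pre><code class="lang-yaml"># roles/webserver/tasks/main.yml (illustrative sketch)
- name: Install NGINX
  apt:
    name: nginx
    state: present
    update_cache: yes
  become: true

- name: Copy the environment-specific index page
  copy:
    src: "index-{{ env }}.html"
    dest: /var/www/html/index.html
  become: true

- name: Restart NGINX
  service:
    name: nginx
    state: restarted
  become: true
</code></pre>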
<hr />
<h2 id="heading-connecting-it-all-together">🔗 Connecting It All Together</h2>
<p>That was an eagle-eye view of the project. Now let's zoom in and see <strong>how everything connects together</strong>.</p>
<p>At the heart of this automation lies the <code>deploy.sh</code> script: the glue that ties Terraform and Ansible into one seamless workflow.</p>
<p>Here's what happens step by step 👇</p>
<h3 id="heading-step-1-provision-infrastructure-with-terraform">🧩 Step 1: Provision Infrastructure with Terraform</h3>
<p>First, the script navigates inside the <code>terra-config</code> directory and runs:</p>
<pre><code class="lang-bash">terraform init
terraform apply -auto-approve
</code></pre>
<p>This initializes Terraform and provisions the infrastructure across all three environments: <strong>Dev</strong>, <strong>Stage</strong>, and <strong>Prod</strong>.</p>
<p>Once the resources are created, it generates an <code>output.json</code> file that stores the <strong>public IPs</strong> of every instance in JSON format, categorized neatly by environment.</p>
<h3 id="heading-step-2-generate-dynamic-inventory-with-python">🐍 Step 2: Generate Dynamic Inventory with Python</h3>
<p>Next, the script moves back to the main project directory and executes the <code>generate_inv.py</code> script.</p>
<p>This Python script:</p>
<ul>
<li><p>Reads the <code>output.json</code> file created by Terraform</p>
</li>
<li><p>Formats it properly to generate an Ansible <code>hosts.ini</code> file</p>
</li>
<li><p>Appends other essential details, such as the <strong>SSH key path</strong> used for authentication</p>
</li>
</ul>
<p>A quick <code>sleep 15</code> command ensures the EC2 instances finish their health checks before Ansible jumps in for configuration.</p>
<h3 id="heading-step-3-configure-instances-with-ansible">Step 3: Configure Instances with Ansible</h3>
<p>Finally, once our <code>hosts.ini</code> file is ready, the script triggers the <strong>Ansible playbook</strong> command:</p>
<pre><code class="lang-bash">ansible-playbook playbook.yml
</code></pre>
<p>This command configures all <strong>six EC2 instances</strong> automatically, each according to its environment.<br />Every environment (Dev, Stage, Prod) gets its own <strong>custom</strong> <code>index.html</code> file served through <strong>NGINX</strong>.</p>
<hr />
<h2 id="heading-enough-talk-lets-get-our-hands-dirty">🧠 Enough Talk, Let's Get Our Hands Dirty!</h2>
<p>Alright, enough with the theory; it's time to <strong>get practical</strong> and actually see this automation in action 🔥</p>
<p>Before we launch the setup, we need an SSH key that Ansible will use to authenticate into our EC2 instances.</p>
<p>Run the following command to generate one:</p>
<pre><code class="lang-bash">ssh-keygen -t rsa -b 4096 -f ~/.ssh/appKey
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671154275/53017044-2a19-4969-ac50-298d0c693888.png" alt class="image--center mx-auto" /></p>
<p>This command creates a secure SSH key pair named <code>appKey</code> inside your <code>~/.ssh</code> directory.<br />We'll use this private key for all six EC2 instances: <strong>two for each environment</strong> (Dev, Stage, and Prod).</p>
<h3 id="heading-lets-deploy-everything">🚀 Let's Deploy Everything</h3>
<p>Now, since we're DevOps engineers (and we hate doing things manually 😎), we'll use the <code>deploy.sh</code> script to automate everything, from provisioning infrastructure to configuring servers.</p>
<p>Make sure the script has execution permissions, then run it:</p>
<pre><code class="lang-bash">chmod u+x deploy.sh
./deploy.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671168331/97d57cd5-cf8a-47a9-bcf8-cb0c5bf2fe14.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671174387/422a8c6c-def9-4f15-8ddb-459d7e6a0b56.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671191167/8a6be595-d710-4aa3-9622-8aae1377604a.png" alt class="image--center mx-auto" /></p>
<p>Sit back, grab a coffee, and <strong>watch the logs</strong> as the magic unfolds.<br />Terraform will create your instances and generate the output file, the Python script will prepare your inventory, and Ansible will jump in to configure your NGINX servers, all in one go.</p>
<h3 id="heading-test-the-setup">🌐 Test the Setup</h3>
<p>Once the process completes, visit the <strong>public IPs</strong> of your EC2 instances in your browser.<br />You'll see a different <code>index.html</code> page for each environment (Dev, Stage, and Prod), each showcasing the environment name and design differences.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671225867/26cd7e66-804c-4f96-b31e-01f37361fb25.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671234062/5faee24c-f271-4411-a8b8-24ec2f64de93.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671247321/4b9b107b-cd4c-4e50-9b18-572066341158.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-clean-up">🧹 Clean Up</h3>
<p>Once you're done testing, don't forget to <strong>destroy your infrastructure</strong>; AWS isn't running on free coffee beans 😅</p>
<p>Run the following command to safely tear everything down:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-ansible-starter/terra-config
terraform destroy --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761671259859/d3cc1713-c6b6-42a6-bd3b-a0a4727e3aa6.png" alt class="image--center mx-auto" /></p>
<p>This will clean up all EC2 instances, security groups, and key pairs, saving you from unwanted AWS charges 💸</p>
<hr />
<h2 id="heading-conclusion">🎯 Conclusion</h2>
<p>And thats a wrap, folks! 🥳</p>
<p>We just walked through a <strong>complete automation pipeline</strong> where Terraform handled <strong>infrastructure provisioning</strong>, and Ansible took care of <strong>server configuration</strong>, all in a <strong>multi-environment setup</strong>.<br />This hands-on project gives you a solid understanding of how <strong>real-world DevOps workflows</strong> look when code, automation, and infrastructure come together.</p>
<p>By combining these two tools, Terraform and Ansible, you've essentially built a <strong>foundation for scalable, environment-aware deployments</strong>.<br />Whether it's deploying static sites, configuring app servers, or scaling microservices, the same workflow logic applies: automate, version-control, and manage everything as code.</p>
<p>If you followed along till here, you've not just learned two tools;<br />you've built the mindset of a DevOps engineer who thinks in systems and automates for efficiency.</p>
<p>Keep exploring, keep experimenting, and as always, <strong>build, break, and learn</strong> 💪</p>
<hr />
<h3 id="heading-connect-with-me">🌐 Connect with Me</h3>
<p>If you enjoyed this guide, share it with your DevOps buddies and stay tuned for more such projects!<br />You can also find me sharing tech content, tutorials, and behind-the-scenes DevOps experiments here 👇</p>
<ul>
<li><p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">x.com/praveshstwt</a></p>
</li>
<li><p>📹 <strong>YouTube:</strong> <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">youtube.com/@pravesh-sudha</a></p>
</li>
<li><p>🌐 <strong>Website:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a></p>
</li>
</ul>
]]></description><link>https://blog.praveshsudha.com/terraform-meets-ansible-automating-multi-environment-infrastructure-on-aws</link><guid isPermaLink="true">https://blog.praveshsudha.com/terraform-meets-ansible-automating-multi-environment-infrastructure-on-aws</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ansible]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Deploying Cognee AI Starter App on AWS ECS Using Terraform]]></title><description><![CDATA[<h2 id="heading-introduction">🧭 Introduction</h2>
<p>Welcome Devs to the world of <strong>AI</strong> and <strong>Automation</strong> 👋</p>
<p>Today, we're diving into an exciting hands-on project where <strong>infrastructure meets intelligence</strong>. We'll explore <strong>Cognee AI</strong>, a memory layer for LLMs that lets applications remember, retrieve, and build on prior context, and see how it works in action through the <strong>Cognee Starter App</strong>, built with Flask.</p>
<p>But we're not stopping there. Once the app is ready, we'll <strong>deploy it to AWS ECS (Fargate)</strong> using <strong>Terraform</strong>, bringing the power of <strong>Infrastructure as Code (IaC)</strong> to streamline and automate the entire deployment process.</p>
<p>By the end of this guide, you'll:</p>
<ul>
<li><p>🧠 Get familiar with Cognee AI and its role as a memory layer for LLMs.</p>
</li>
<li><p>🐳 Containerise and prepare a Flask application for production.</p>
</li>
<li><p>Provision AWS infrastructure using Terraform.</p>
</li>
<li><p>🚀 Deploy the app seamlessly on AWS ECS with Fargate.</p>
</li>
</ul>
<p>So without further ado, let's get started and build something awesome!</p>
<hr />
<h2 id="heading-youtube-demonstation">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/uvkwXSUJ6Hw">https://youtu.be/uvkwXSUJ6Hw</a></div>
<p> </p>
<hr />
<h2 id="heading-pre-requisites">🧰 Pre-Requisites</h2>
<p>Before we roll up our sleeves and dive into the deployment, let's make sure your local environment is ready. Having the right tools installed will make the process smooth and error-free.</p>
<p>Here's what you'll need on your system:</p>
<ul>
<li><p>🪝 <strong>AWS CLI configured with an IAM user</strong> that has <strong>ECS</strong>, <strong>VPC</strong>, and <strong>IAM Full Access</strong> permissions.<br />👉 If you're not familiar with this setup, check out my detailed step-by-step guide here: <a target="_blank" href="https://blog.praveshsudha.com/learn-how-to-deploy-a-three-tier-application-on-aws-eks-using-terraform-with-best-practices#heading-step-1-set-up-aws-cli-and-iam-user">Learn How to Deploy a Three-Tier Application on AWS EKS Using Terraform</a>.</p>
</li>
<li><p>🐍 <strong>Python</strong>, since the Cognee Starter app runs on Flask.</p>
</li>
<li><p>🧱 <strong>Terraform CLI</strong>, the star of this blog, which we'll use to provision and manage our AWS infrastructure.</p>
</li>
</ul>
<p>Once you've checked these boxes, you're all set to move on to the fun part: building and deploying our application!</p>
<hr />
<h2 id="heading-understanding-cognee-ai">🧠 Understanding Cognee AI</h2>
<p>Before jumping into infrastructure, let's take a moment to understand what <strong>Cognee AI</strong> actually does, and why it matters.</p>
<p>In simple terms: <strong>Cognee organizes your data into AI memory.</strong></p>
<p>When you make a call to a Large Language Model (LLM), the interaction is <strong>stateless</strong>, meaning the model doesn't remember previous calls or have access to your broader document context. This makes it difficult to build real-world applications that require context retention, document linking, or knowledge continuity.</p>
<p>That's where <strong>Cognee AI</strong> comes in. It acts as a <strong>memory layer for LLMs</strong>, allowing you to:</p>
<ul>
<li><p><strong>Link documents</strong> and data sources together.</p>
</li>
<li><p><strong>Maintain context</strong> across multiple LLM calls.</p>
</li>
<li><p><strong>Create richer, more intelligent applications</strong> that can reason over previous interactions.</p>
</li>
</ul>
<p>In this project, we'll be using the <strong>Cognee Starter App</strong>, which gives a hands-on introduction to this powerful memory layer, and then we'll deploy it on <strong>AWS ECS (Fargate)</strong> to make it production-ready.</p>
<hr />
<h2 id="heading-how-cognee-works">🧭 How Cognee Works</h2>
<p>Cognee isn't just about storing data; it's about <strong>understanding</strong> and <strong>structuring</strong> it so LLMs can use it intelligently. When it comes to your data, Cognee knows what matters.</p>
<p>There are <strong>four key operations</strong> that power the Cognee memory layer:</p>
<ul>
<li><p><code>.add</code>: Prepare for Cognification<br />This is the starting point. You send your data asynchronously, and Cognee cleans, processes, and prepares it for the memory layer.</p>
</li>
<li><p><code>.cognify</code>: Build a Knowledge Graph with Embeddings<br />Cognee splits your documents into chunks, extracts entities and relations, and links everything into a <strong>queryable knowledge graph</strong>, the core of its memory layer.</p>
</li>
<li><p><code>.search</code>: Query with Context<br />When you search, Cognee combines <strong>vector similarity</strong> with <strong>graph traversal</strong>. Depending on the mode, it can fetch raw nodes, explore relationships, or even generate natural-language answers using RAG (Retrieval-Augmented Generation). It ensures the <strong>right context</strong> is always delivered to the LLM.</p>
</li>
<li><p><code>.memify</code>: Semantic Enrichment of the Graph <em>(coming soon)</em><br />This will enhance the knowledge graph with <strong>deeper semantic understanding</strong>, adding richer contextual relationships.</p>
</li>
</ul>
<p>In our <strong>hands-on demonstration</strong>, we'll use the first three methods (<code>.add</code>, <code>.cognify</code>, and <code>.search</code>) to see how Cognee works in action before deploying the app on AWS ECS.</p>
<hr />
<h2 id="heading-a-small-demo-to-understand-cognee-functions">🧪 A Small Demo to Understand Cognee Functions</h2>
<p>Before we jump into deploying our Flask application on AWS, let's take a few minutes to understand how <strong>Cognee AI</strong> works in practice.</p>
<p>I've already hosted the project code on GitHub 👇<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects/">GitHub Repository: terra-projects</a></p>
<p>Head inside the <code>cognee-flask</code> directory, and you'll find the entire project structure there.</p>
<h3 id="heading-step-1-set-up-environment-variables">🧰 Step 1: Set up Environment Variables</h3>
<p>Inside the project folder, create a <code>.env</code> file. You can refer to the <code>.env.example</code> file for the format.</p>
<p>You'll need a <strong>Gemini API Key</strong>, which is <strong>free</strong> to get. Paste it into your <code>.env</code> file like this:</p>
<pre><code class="lang-python">LLM_PROVIDER=<span class="hljs-string">"gemini"</span>
LLM_MODEL=<span class="hljs-string">"gemini/gemini-2.5-flash"</span>
LLM_API_KEY=<span class="hljs-string">"&lt;your-gemini-key&gt;"</span>

<span class="hljs-comment"># Embeddings</span>
EMBEDDING_PROVIDER=<span class="hljs-string">"gemini"</span>
EMBEDDING_MODEL=<span class="hljs-string">"gemini/text-embedding-004"</span>
EMBEDDING_DIMENSIONS=<span class="hljs-string">"768"</span>
EMBEDDING_API_KEY=<span class="hljs-string">"&lt;your-gemini-key&gt;"</span>
</code></pre>
<p>👉 If you're using a different LLM provider, follow this guide to configure it properly: <a target="_blank" href="https://docs.cognee.ai/getting-started/installation#setup">Cognee Docs: Installation &amp; Setup</a></p>
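<p>For context, the <code>.env</code> format is nothing magical: plain <code>KEY="value"</code> pairs, with <code>#</code> for comments. A minimal parser sketch (illustrative only; cognee loads these values through its own configuration machinery, so you never need to parse them by hand):</p>

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY="value" lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'LLM_PROVIDER="gemini"\n# Embeddings\nEMBEDDING_DIMENSIONS="768"'
print(parse_env(sample)["LLM_PROVIDER"])  # gemini
```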
<h3 id="heading-step-2-install-dependencies">Step 2: Install Dependencies</h3>
<p>To install all the required dependencies, run:</p>
<pre><code class="lang-bash">uv sync
</code></pre>
<p>This will set up everything you need to run the demo locally.</p>
<h3 id="heading-step-3-understanding-testingcogneepy">🧠 Step 3: Understanding <code>testing_cognee.py</code></h3>
<p>Now, open the <code>testing_cognee.py</code> file. Here's what it looks like:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> cognee <span class="hljs-keyword">import</span> SearchType, visualize_graph
<span class="hljs-keyword">import</span> cognee
<span class="hljs-keyword">import</span> asyncio
<span class="hljs-keyword">import</span> os, pathlib

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    <span class="hljs-comment"># Create a clean slate for Cognee -- reset data and system state</span>
    <span class="hljs-keyword">await</span> cognee.prune.prune_data()
    <span class="hljs-keyword">await</span> cognee.prune.prune_system(metadata=<span class="hljs-literal">True</span>)

    <span class="hljs-comment"># Add sample content</span>
    text = <span class="hljs-string">"Cognee turns documents into AI memory."</span>
    <span class="hljs-keyword">await</span> cognee.add(text)

    <span class="hljs-comment"># Process with LLMs to build the knowledge graph</span>
    <span class="hljs-keyword">await</span> cognee.cognify()

    graph_file_path = str(
        pathlib.Path(
            os.path.join(pathlib.Path(__file__).parent, <span class="hljs-string">".artifacts/graph_visualization.html"</span>)
        ).resolve()
    )
    <span class="hljs-keyword">await</span> visualize_graph(graph_file_path)

    <span class="hljs-comment"># Search the knowledge graph</span>
    graph_result = <span class="hljs-keyword">await</span> cognee.search(
        query_text=<span class="hljs-string">"What does Cognee do?"</span>, query_type=SearchType.GRAPH_COMPLETION
    )
    print(<span class="hljs-string">"Graph Result: "</span>)
    print(graph_result)

    rag_result = <span class="hljs-keyword">await</span> cognee.search(
        query_text=<span class="hljs-string">"What does Cognee do?"</span>, query_type=SearchType.RAG_COMPLETION
    )
    print(<span class="hljs-string">"RAG Result: "</span>)
    print(rag_result)

    basic_result = <span class="hljs-keyword">await</span> cognee.search(
        query_text=<span class="hljs-string">"What are the main themes in my data?"</span>
    )
    print(<span class="hljs-string">"Basic Result: "</span>)
    print(basic_result)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    asyncio.run(main())
</code></pre>
<p>📝 <strong>What's happening here:</strong></p>
<ul>
<li><p>First, we <strong>purge any existing data</strong> using <code>prune_data</code> and <code>prune_system</code>.</p>
</li>
<li><p>Then, we <strong>add new text data</strong> to Cognee using <code>.add</code>.</p>
</li>
<li><p>We <strong>process and build a knowledge graph</strong> with <code>.cognify</code>.</p>
</li>
<li><p>The graph is stored in <code>.artifacts/graph_visualization.html</code>.</p>
</li>
<li><p>Finally, we <strong>run three types of searches</strong>:</p>
<ul>
<li><p><code>GRAPH_COMPLETION</code>: explores relationships.</p>
</li>
<li><p><code>RAG_COMPLETION</code>: generates natural-language answers.</p>
</li>
<li><p>Basic Search: retrieves core themes.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-step-4-run-the-demo">Step 4: Run the Demo</h3>
<p>Run the script with:</p>
<pre><code class="lang-bash">uv run testing_cognee.py
</code></pre>
<p>You should see outputs for <strong>Graph</strong>, <strong>RAG</strong>, and <strong>Basic</strong> results in your terminal. You can also open the <code>.artifacts/graph_visualization.html</code> file in a browser to view the <strong>knowledge graph</strong> Cognee has generated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374692896/8be6e4af-6894-4a68-9444-88e0269a80c6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374698804/05ab39a6-4044-4916-b853-c4dbc9a3fbf9.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-5-run-the-flask-app-locally">🧪 Step 5: Run the Flask App Locally</h3>
<p>Before deploying to the cloud, let's test the Flask app locally:</p>
<pre><code class="lang-bash">uv run app.py
</code></pre>
<p>Once the server starts, open 👉 <a target="_blank" href="http://localhost:5000/">http://localhost:5000</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374725269/77d88036-b06b-430f-807d-e61e69155fe3.png" alt class="image--center mx-auto" /></p>
<p>You'll see the <strong>Cognee AI Starter App</strong> UI. Here's what you can do:</p>
<ul>
<li><p>Add your own data.</p>
</li>
<li><p>Ask a query.</p>
</li>
<li><p>Wait 1–2 minutes.</p>
</li>
<li><p>You'll get:</p>
<ul>
<li><p>🧠 <strong>Graph Completion Result</strong></p>
</li>
<li><p>💬 <strong>RAG Completion Result</strong></p>
</li>
<li><p>📝 <strong>Basic Theme of the text</strong></p>
</li>
<li><p>🌐 A <strong>vector graph</strong> visualization generated by Cognee.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374736673/757b389e-fd88-4364-9d0c-def6b2d9c6b5.png" alt class="image--center mx-auto" /></p>
<p>Click on <strong>View Knowledge Graph</strong> to explore the graph and see how Cognee structured your data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374753999/893ebe4c-4c21-47ff-a7f9-4d539c93859d.png" alt class="image--center mx-auto" /></p>
<p>With this, we've understood Cognee's core functionality locally.<br />Next up: we'll take this application to the cloud by <strong>deploying it on AWS ECS using Terraform</strong> 🚀</p>
<hr />
<h2 id="heading-deploying-the-cognee-app-on-aws-ecs-using-terraform">Deploying the Cognee App on AWS ECS Using Terraform</h2>
<p>We've successfully tested the <strong>Cognee AI Starter App</strong> locally; now it's time to take things to the <strong>cloud</strong> 🌥</p>
<p>We'll use <strong>Terraform</strong> to provision all the required AWS infrastructure and <strong>deploy our Flask application on ECS (Fargate)</strong>.</p>
<p>Inside your project directory, navigate to the <code>terra-config</code> folder. This is where all of our Terraform configuration files live.</p>
<h3 id="heading-step-1-understanding-the-terraform-files">🧱 Step 1: Understanding the Terraform Files</h3>
<ul>
<li><p><code>provider.tf</code><br />  Defines AWS as our <strong>cloud provider</strong> and sets the region and provider configuration.</p>
</li>
<li><p><code>default_config.tf</code></p>
<ul>
<li><p>Fetches the <strong>default VPC</strong> and <strong>subnets</strong>.</p>
</li>
<li><p>Creates a <strong>security group</strong> for ECS tasks with port <code>5000</code> open to allow inbound traffic.</p>
</li>
</ul>
</li>
<li><p><code>main.tf</code> <em>(the heart of the project)</em></p>
<ul>
<li><p>Creates an <strong>ECS cluster</strong>.</p>
</li>
<li><p>Defines the <strong>ECS task definition</strong> with:</p>
<ul>
<li><p>Docker image</p>
</li>
<li><p>Container port</p>
</li>
<li><p>CPU &amp; memory configurations</p>
</li>
<li><p>CPU architecture</p>
</li>
</ul>
</li>
<li><p>Creates an <strong>IAM role</strong> for ECS.</p>
</li>
<li><p>Finally, provisions an <strong>ECS service</strong> using the task definition inside the cluster.</p>
</li>
</ul>
</li>
<li><p><code>get_ip.sh</code><br />A simple shell script that uses the AWS CLI to fetch the <strong>public URL</strong> where your application is running.</p>
</li>
</ul>
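<p>To make that description concrete, here is a hedged sketch of the kind of resources <code>main.tf</code> wires together (resource names, image, and CPU/memory values below are illustrative placeholders, not the repo's exact configuration):</p>

```hcl
# IAM role and security group resources omitted for brevity
resource "aws_ecs_cluster" "cognee" {
  name = "cognee-cluster"
}

resource "aws_ecs_task_definition" "cognee" {
  family                   = "cognee-task"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "1024"
  memory                   = "2048"
  execution_role_arn       = aws_iam_role.ecs_execution.arn

  container_definitions = jsonencode([{
    name         = "cognee-app"
    image        = "<your-dockerhub-user>/cognee-flask:latest"
    portMappings = [{ containerPort = 5000 }]
  }])
}

resource "aws_ecs_service" "cognee" {
  name            = "cognee-service"
  cluster         = aws_ecs_cluster.cognee.id
  task_definition = aws_ecs_task_definition.cognee.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = data.aws_subnets.default.ids
    security_groups  = [aws_security_group.ecs_sg.id]
    assign_public_ip = true
  }
}
```

<p>The <code>assign_public_ip = true</code> setting is what makes it possible to resolve a public address for the task later.</p>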
<h3 id="heading-step-2-initialize-and-deploy-the-infrastructure">Step 2: Initialize and Deploy the Infrastructure</h3>
<p>Run the following commands inside the <code>terra-config</code> directory:</p>
<pre><code class="lang-bash">terraform init
terraform plan
terraform apply --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374913575/e5443ce9-3a33-48d4-ba86-40e2313b42f9.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374922525/713c8169-4d71-4398-b0e0-7708f5a126d1.png" alt class="image--center mx-auto" /></p>
<p>This will take around <strong>2 minutes</strong> to provision the complete infrastructure on AWS, including the ECS cluster, service, task definition, and networking setup.</p>
<h3 id="heading-step-3-get-the-application-url">🌍 Step 3: Get the Application URL</h3>
<p>Once the deployment finishes, make the script executable and run it:</p>
<pre><code class="lang-bash">chmod u+x get_ip.sh
./get_ip.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374946132/84935ceb-aee5-4813-9656-6ba9df261753.png" alt class="image--center mx-auto" /></p>
<p>This will output the <strong>URL</strong> where your application is hosted.<br />👉 Open the URL in your browser, and you'll see your <strong>Flask Cognee application</strong> live on AWS ECS 🎉</p>
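<p>Under the hood, a script like this typically asks ECS for the running task, follows it to its network interface (ENI), and reads the public IP from there. A rough Python/Boto3 equivalent (the cluster name and port are assumptions; the repo's shell script is the source of truth), with the response parsing split into a pure helper:</p>

```python
def extract_public_ip(eni_response: dict):
    """Pull the public IP out of a describe_network_interfaces-shaped response."""
    for eni in eni_response.get("NetworkInterfaces", []):
        public_ip = eni.get("Association", {}).get("PublicIp")
        if public_ip:
            return public_ip
    return None

def get_app_url(cluster="cognee-cluster", region="us-east-1"):
    """Resolve the public URL of the first running ECS task (names are assumed)."""
    import boto3  # imported lazily so the parsing helper stays dependency-free
    ecs = boto3.client("ecs", region_name=region)
    ec2 = boto3.client("ec2", region_name=region)
    task_arns = ecs.list_tasks(cluster=cluster)["taskArns"]
    tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]
    # For awsvpc tasks, the ENI id lives in the attachment details
    eni_id = next(
        detail["value"]
        for attachment in tasks[0]["attachments"]
        for detail in attachment["details"]
        if detail["name"] == "networkInterfaceId"
    )
    response = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])
    ip = extract_public_ip(response)
    return f"http://{ip}:5000" if ip else None
```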
<p>From here, you can:</p>
<ul>
<li><p>Add your data.</p>
</li>
<li><p>Ask queries.</p>
</li>
<li><p>See the graph and AI-generated answers, exactly like in the local demo.<br />The only difference: it's now running inside a <strong>scalable, production-grade ECS cluster</strong>.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374971414/c8806e8a-c518-43fc-a1c8-28657862b643.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374978112/823b8a99-1b36-4c72-bf9b-6db6d2aa8fc6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374986206/ab285cf3-5a3e-4b5f-b746-dc93362956ef.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-clean-up-resources">🧹 Step 4: Clean Up Resources</h3>
<p>Once you're done testing, it's good practice to destroy the infrastructure to avoid unwanted AWS charges:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760374994734/156bdb71-e9ce-44e9-b4d7-675495b717cb.png" alt class="image--center mx-auto" /></p>
<p>This will tear down all ECS services, roles, and networking resources you created for this project.</p>
<p><strong>And that's it!</strong><br />You've successfully deployed your <strong>Flask Cognee AI Starter App</strong> on AWS ECS using <strong>Terraform</strong>. With just a few commands, we automated the entire infrastructure provisioning and deployment pipeline.</p>
<hr />
<h2 id="heading-conclusion">🎯 Conclusion</h2>
<p>Congratulations! 🎉 You've successfully taken a <strong>Flask-based Cognee AI Starter App</strong> from your local machine all the way to the <strong>cloud using AWS ECS and Terraform</strong>.</p>
<p>In this blog, we learned how to:</p>
<ul>
<li><p>Understand <strong>Cognee AI</strong> and its memory layer for LLMs.</p>
</li>
<li><p>Explore and test its key operations: <code>.add</code>, <code>.cognify</code>, and <code>.search</code>.</p>
</li>
<li><p>Run the app locally to see how Cognee organizes and queries your data.</p>
</li>
<li><p>Provision AWS infrastructure using <strong>Terraform</strong>.</p>
</li>
<li><p>Deploy the Flask application on <strong>ECS (Fargate)</strong> and access it via a public URL.</p>
</li>
</ul>
<p>This hands-on project demonstrates the power of combining <strong>AI, containerization, and infrastructure automation</strong> to deploy intelligent applications in a <strong>scalable, production-ready environment</strong>.</p>
<h3 id="heading-explore-more">🔗 Explore More</h3>
<ul>
<li><p>Check out <strong>Cognee AI</strong> here: <a target="_blank" href="https://cognee.ai/">https://cognee.ai</a></p>
</li>
<li><p>Follow me on socials for more DevOps, cloud, and AI tutorials:</p>
<ul>
<li><p><strong>GitHub:</strong> <a target="_blank" href="https://github.com/Pravesh-Sudha">https://github.com/Pravesh-Sudha</a></p>
</li>
<li><p><strong>Blog:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">https://blog.praveshsudha.com</a></p>
</li>
<li><p><strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">https://www.linkedin.com/in/pravesh-sudha</a></p>
</li>
</ul>
</li>
</ul>
<p>Keep exploring, keep building, and stay tuned for more <strong>hands-on projects combining AI and DevOps</strong>! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/deploying-cognee-ai-starter-app-on-aws-ecs-using-terraform</link><guid isPermaLink="true">https://blog.praveshsudha.com/deploying-cognee-ai-starter-app-on-aws-ecs-using-terraform</guid><category><![CDATA[Devops]]></category><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[Build Your Own AWS DevOps CLI with Python & Boto3]]></title><description><![CDATA[<h2 id="heading-introduction-python-meets-devops-automation">🐍 <strong>Introduction: Python Meets DevOps Automation</strong></h2>
<p>Welcome, Devs, to the world of <strong>code and automation</strong> 👨💻</p>
<p>Having been around the DevOps landscape for more than two years, I've often come across one of the biggest myths in the field: <em>DevOps engineers don't code.</em><br />Well, <strong>we do code</strong>, and quite a lot of it. From automating repetitive tasks, writing IAM policies, and managing infrastructure as code in HCL, to gluing together entire cloud workflows, code is very much a part of a DevOps engineer's daily toolkit.</p>
<p>It's been a while since I last got my hands dirty with Python, so I decided to change that. In today's blog, we're kicking off another project in the <strong>Python for DevOps</strong> series.</p>
<p>We'll be building a <strong>custom CLI tool using Python 3 and Boto3</strong> (the AWS SDK for Python) to <strong>manage EC2 instances, S3 buckets, and Lambda functions</strong>, all through the <strong>command line</strong>.</p>
<p>This project is a great way to:</p>
<ul>
<li><p>Strengthen your Python skills 🧠</p>
</li>
<li><p>Get hands-on with <strong>Boto3</strong> 💻</p>
</li>
<li><p>Automate AWS tasks like a pro</p>
</li>
</ul>
<p>So without further ado, let's dive right in and start building something practical.</p>
<hr />
<h2 id="heading-youtube-demonstration">📽 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/7HkgHraFK64">https://youtu.be/7HkgHraFK64</a></div>
<p> </p>
<hr />
<h2 id="heading-prerequisites">🧰 <strong>Prerequisites</strong></h2>
<p>Before we start building our custom AWS CLI, let's make sure your system is ready to roll.<br />Here's what you'll need:</p>
<ul>
<li><p>🪄 <strong>AWS CLI Installed &amp; Configured</strong><br />  You should have the AWS CLI installed and configured with an <strong>IAM user</strong> that has <strong>full access</strong> to:</p>
<ul>
<li><p>EC2</p>
</li>
<li><p>Lambda</p>
</li>
<li><p>IAM</p>
</li>
<li><p>S3</p>
</li>
</ul>
</li>
</ul>
<p>If you haven't set this up yet, I've got you covered. Check out the <strong>Step 1</strong> section of my previous blog here:<br />👉 <a target="_blank" href="https://blog.praveshsudha.com/learn-how-to-deploy-a-three-tier-application-on-aws-eks-using-terraform-with-best-practices#heading-step-1-set-up-aws-cli-and-iam-user">Learn how to deploy a Three-Tier Application on AWS EKS using Terraform</a></p>
<ul>
<li><p>🐍 <strong>Python 3</strong><br />  Make sure Python 3 is installed on your system.<br />  You can verify it by running:</p>
<pre><code class="lang-bash">  python3 --version
</code></pre>
</li>
</ul>
<p>Once both are ready, we can move on to setting up our <strong>project structure</strong> and start coding.</p>
<hr />
<h2 id="heading-writing-the-core-python-scripts-ec2-s3-amp-lambda-automation">🧠 <strong>Writing the Core Python Scripts: EC2, S3 &amp; Lambda Automation</strong></h2>
<p>The full source code for this project is available on my GitHub repo:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/python-for-devops">Pravesh-Sudha/python-for-devops</a></p>
<p>Navigate inside the <a target="_blank" href="https://github.com/Pravesh-Sudha/python-for-devops/tree/main/automate-ec2-s3-and-lambda"><code>automate-ec2-s3-and-lambda</code></a> directory, and you'll find <strong>three main files</strong>:</p>
<ul>
<li><p><code>ec2_manager.py</code></p>
</li>
<li><p><code>s3_manager.py</code></p>
</li>
<li><p><code>lambda_manager.py</code></p>
</li>
</ul>
<p>These files contain the logic to interact with AWS using <strong>Boto3</strong>, the official AWS SDK for Python.</p>
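<p>Before looking at each manager, here is a hedged sketch of how a thin CLI entry point could wire these classes together with <code>argparse</code> (the <code>awsctl</code> name and sub-command layout are my assumptions for illustration, not necessarily the repo's exact structure):</p>

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI skeleton: one sub-command per AWS service manager."""
    parser = argparse.ArgumentParser(prog="awsctl", description="Mini AWS DevOps CLI")
    sub = parser.add_subparsers(dest="service", required=True)

    ec2 = sub.add_parser("ec2", help="Manage EC2 instances")
    ec2.add_argument("action", choices=["create", "list", "terminate"])
    ec2.add_argument("--instance-id", help="Instance id, required for terminate")

    s3 = sub.add_parser("s3", help="Manage S3 buckets")
    s3.add_argument("action", choices=["create", "list", "delete"])
    s3.add_argument("--bucket", help="Bucket name for create/delete")
    return parser

def dispatch(args: argparse.Namespace) -> str:
    """Route a parsed command; the real tool would call the manager classes here."""
    # e.g. if args.service == "ec2" and args.action == "list":
    #          return str(EC2Manager().list_instances())
    return f"{args.service}:{args.action}"

print(dispatch(build_parser().parse_args(["ec2", "list"])))  # ec2:list
```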
<h3 id="heading-1-ec2-manager-ec2managerpy">🖥 <strong>1. EC2 Manager:</strong> <code>ec2_manager.py</code></h3>
<p>This file handles everything related to EC2 instances: <strong>creating</strong>, <strong>listing</strong>, and <strong>terminating</strong> them.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">EC2Manager</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, region_name=<span class="hljs-string">"us-east-1"</span></span>):</span>
        self.ec2 = boto3.client(<span class="hljs-string">"ec2"</span>, region_name=region_name)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_instance</span>(<span class="hljs-params">self, image_id=<span class="hljs-string">"ami-0360c520857e3138f"</span>, instance_type=<span class="hljs-string">"t3.micro"</span>, key_name=None</span>):</span>
        <span class="hljs-keyword">try</span>:
            print(<span class="hljs-string">"Creating an EC2 instance..."</span>)
            params = {
                <span class="hljs-string">"ImageId"</span>: image_id,
                <span class="hljs-string">"InstanceType"</span>: instance_type,
                <span class="hljs-string">"MinCount"</span>: <span class="hljs-number">1</span>,
                <span class="hljs-string">"MaxCount"</span>: <span class="hljs-number">1</span>
            }
            <span class="hljs-keyword">if</span> key_name:
                params[<span class="hljs-string">"KeyName"</span>] = key_name

            response = self.ec2.run_instances(**params)
            instance_id = response[<span class="hljs-string">"Instances"</span>][<span class="hljs-number">0</span>][<span class="hljs-string">"InstanceId"</span>]
            print(<span class="hljs-string">f"EC2 instance created successfully with ID: <span class="hljs-subst">{instance_id}</span>"</span>)
            <span class="hljs-keyword">return</span> instance_id
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"EC2 creation failed. Error occurred: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">list_instances</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">try</span>:
            response = self.ec2.describe_instances()
            instances = []
            <span class="hljs-keyword">for</span> reservation <span class="hljs-keyword">in</span> response[<span class="hljs-string">"Reservations"</span>]:
                <span class="hljs-keyword">for</span> instance <span class="hljs-keyword">in</span> reservation[<span class="hljs-string">"Instances"</span>]:
                    instance_info = {
                        <span class="hljs-string">"InstanceId"</span>: instance[<span class="hljs-string">"InstanceId"</span>],
                        <span class="hljs-string">"State"</span>: instance[<span class="hljs-string">"State"</span>][<span class="hljs-string">"Name"</span>],
                        <span class="hljs-string">"Type"</span>: instance[<span class="hljs-string">"InstanceType"</span>],
                        <span class="hljs-string">"PublicIP"</span>: instance.get(<span class="hljs-string">"PublicIpAddress"</span>)
                    }
                    instances.append(instance_info)
            <span class="hljs-keyword">return</span> instances
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to list EC2 instances. Error occurred: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">return</span> []

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">terminate_instance</span>(<span class="hljs-params">self, instance_id</span>):</span>
        <span class="hljs-keyword">try</span>:
            self.ec2.terminate_instances(InstanceIds=[instance_id])
            print(<span class="hljs-string">f"Instance with id <span class="hljs-subst">{instance_id}</span> terminated successfully"</span>)
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to terminate instance <span class="hljs-subst">{instance_id}</span>. Error occurred: <span class="hljs-subst">{e}</span>"</span>)
</code></pre>
<p>📝 <strong>Quick Breakdown:</strong></p>
<ul>
<li><p>We initialize a Boto3 EC2 client inside the class constructor.</p>
</li>
<li><p><code>create_instance()</code> launches a new EC2 instance using a given AMI ID and instance type.</p>
</li>
<li><p><code>list_instances()</code> fetches and displays all existing instances with their state and public IP.</p>
</li>
<li><p><code>terminate_instance()</code> shuts down an instance by its ID.</p>
</li>
<li><p>The <code>try-except</code> block ensures clean error handling and avoids breaking the script when AWS throws an exception.</p>
</li>
</ul>
<h3 id="heading-2-s3-manager-s3managerpy">🪣 <strong>2. S3 Manager:</strong> <code>s3_manager.py</code></h3>
<p>This file deals with <strong>creating</strong>, <strong>listing</strong>, and <strong>deleting</strong> S3 buckets.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">S3Manager</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, region_name=<span class="hljs-string">"us-east-1"</span></span>):</span>
        self.s3 = boto3.client(<span class="hljs-string">"s3"</span>, region_name=region_name)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_bucket</span>(<span class="hljs-params">self, bucket_name</span>):</span>
        <span class="hljs-keyword">try</span>:
            print(<span class="hljs-string">f"Creating a new bucket: <span class="hljs-subst">{bucket_name}</span>"</span>)
            self.s3.create_bucket(Bucket=bucket_name)
            print(<span class="hljs-string">"Bucket created successfully."</span>)
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to create Bucket. Error: <span class="hljs-subst">{e}</span>"</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">list_buckets</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">try</span>:
            response = self.s3.list_buckets()
            buckets = [b[<span class="hljs-string">"Name"</span>] <span class="hljs-keyword">for</span> b <span class="hljs-keyword">in</span> response[<span class="hljs-string">"Buckets"</span>]]
            <span class="hljs-keyword">return</span> buckets
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to list buckets. Error: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">return</span> []

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">delete_bucket</span>(<span class="hljs-params">self, bucket_name</span>):</span>
        <span class="hljs-keyword">try</span>:
            print(<span class="hljs-string">f"Deleting Bucket: <span class="hljs-subst">{bucket_name}</span>"</span>)
            self.s3.delete_bucket(Bucket=bucket_name)
            print(<span class="hljs-string">f"Successfully deleted the bucket: <span class="hljs-subst">{bucket_name}</span>"</span>)
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to delete the Bucket. Error: <span class="hljs-subst">{e}</span>"</span>)
</code></pre>
<p>📝 <strong>Quick Breakdown:</strong></p>
<ul>
<li><p>Boto3's S3 client allows easy bucket operations.</p>
</li>
<li><p>We can create new buckets, fetch all bucket names, and delete existing ones.</p>
</li>
<li><p>Again, <code>try-except</code> helps us handle permission issues or invalid operations gracefully.</p>
</li>
</ul>
<h3 id="heading-3-lambda-manager-lambdamanagerpy">🧬 <strong>3. Lambda Manager:</strong> <code>lambda_manager.py</code></h3>
<p>This one is a bit more advanced because Lambda also requires an <strong>IAM role</strong> to execute.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> boto3
<span class="hljs-keyword">from</span> botocore.exceptions <span class="hljs-keyword">import</span> ClientError
<span class="hljs-keyword">import</span> zipfile
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> time, json

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LambdaManager</span>:</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, region_name=<span class="hljs-string">"us-east-1"</span></span>):</span>
        self.lambda_client = boto3.client(<span class="hljs-string">"lambda"</span>, region_name = region_name)
        self.iam_client = boto3.client(<span class="hljs-string">"iam"</span>, region_name = region_name)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">_create_deployment_package</span>(<span class="hljs-params">self, code_path, zip_name=<span class="hljs-string">"lambda_function.zip"</span></span>):</span>
        <span class="hljs-keyword">with</span> zipfile.ZipFile(zip_name, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> zf:
            zf.write(code_path, os.path.basename(code_path))
            print(<span class="hljs-string">"Deployment package built successfully"</span>)
        <span class="hljs-keyword">return</span> zip_name

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">_get_or_create_role</span>(<span class="hljs-params">self, role_name=<span class="hljs-string">"LambdaBasicExecutionRole"</span></span>):</span>
        assume_role_policy = {
            <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
            <span class="hljs-string">"Statement"</span>: [
                {
                    <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
                    <span class="hljs-string">"Principal"</span>: {<span class="hljs-string">"Service"</span>: <span class="hljs-string">"lambda.amazonaws.com"</span>},
                    <span class="hljs-string">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>
                }
            ]
        }
        <span class="hljs-keyword">try</span>:
            role = self.iam_client.get_role(RoleName = role_name)
            print(<span class="hljs-string">f"Getting the IAM role for Lambda...."</span>)
            <span class="hljs-keyword">return</span> role[<span class="hljs-string">"Role"</span>][<span class="hljs-string">"Arn"</span>]
        <span class="hljs-keyword">except</span> self.iam_client.exceptions.NoSuchEntityException <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Creating the IAM role : <span class="hljs-subst">{role_name}</span>...."</span>)
            role = self.iam_client.create_role(
                RoleName = role_name,
                AssumeRolePolicyDocument = json.dumps(assume_role_policy) 
            )
            self.iam_client.attach_role_policy(
                RoleName = role_name,
                PolicyArn = <span class="hljs-string">"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"</span>
            )
            print(<span class="hljs-string">f"Waiting for the IAM role Propagation..."</span>)
            time.sleep(<span class="hljs-number">10</span>)
            <span class="hljs-keyword">return</span> role[<span class="hljs-string">"Role"</span>][<span class="hljs-string">"Arn"</span>]

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">create_lambda</span>(<span class="hljs-params">self, function_name, code_path, handler_name=<span class="hljs-string">"lambda_function.lambda_handler"</span>, runtime = <span class="hljs-string">"python3.9"</span></span>):</span>        
        <span class="hljs-keyword">try</span>:
            zip_file = self._create_deployment_package(code_path)
            role_arn = self._get_or_create_role()

            <span class="hljs-keyword">with</span> open(zip_file, <span class="hljs-string">"rb"</span>) <span class="hljs-keyword">as</span> f:
                zip_bytes = f.read()

            response = self.lambda_client.create_function(
                FunctionName = function_name,
                Code = {<span class="hljs-string">"ZipFile"</span>: zip_bytes},
                Runtime = runtime,
                Role = role_arn,
                Handler = handler_name,
                Timeout = <span class="hljs-number">15</span>,
                MemorySize = <span class="hljs-number">128</span>, 
            )
            print(<span class="hljs-string">f"Lambda function <span class="hljs-subst">{function_name}</span> created Successfully."</span>)
            <span class="hljs-keyword">return</span> response[<span class="hljs-string">"FunctionArn"</span>]
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to create the lambda function. Error: <span class="hljs-subst">{e}</span>"</span>)

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">list_lambdas</span>(<span class="hljs-params">self</span>):</span>
        <span class="hljs-keyword">try</span>:
            response = self.lambda_client.list_functions()
            lambdas = response.get(<span class="hljs-string">"Functions"</span>,[])
            <span class="hljs-keyword">return</span> [
                {
                <span class="hljs-string">"Name"</span>: fn[<span class="hljs-string">"FunctionName"</span>],
                <span class="hljs-string">"Runtime"</span>: fn[<span class="hljs-string">"Runtime"</span>],
                <span class="hljs-string">"Arn"</span>: fn[<span class="hljs-string">"FunctionArn"</span>]
                }
                <span class="hljs-keyword">for</span> fn <span class="hljs-keyword">in</span> lambdas
            ]
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to list Lambda Functions. Error: <span class="hljs-subst">{e}</span>"</span>)
            <span class="hljs-keyword">return</span> []

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">delete_lambda</span>(<span class="hljs-params">self, function_name</span>):</span>
        <span class="hljs-keyword">try</span>:
            self.lambda_client.delete_function(FunctionName = function_name)
            print(<span class="hljs-string">f"Lambda function: <span class="hljs-subst">{function_name}</span> deleted Successfully."</span>)
        <span class="hljs-keyword">except</span> ClientError <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"Failed to delete Lambda Function. Error: <span class="hljs-subst">{e}</span>"</span>)
</code></pre>
<p>📝 <strong>Quick Breakdown:</strong></p>
<ul>
<li><p><code>_create_deployment_package()</code> zips your function code for Lambda deployment.</p>
</li>
<li><p><code>_get_or_create_role()</code> ensures an IAM role exists to let Lambda run. If not, it creates one and attaches the basic execution policy.</p>
</li>
<li><p><code>create_lambda()</code> uploads the zipped code and creates a new Lambda function.</p>
</li>
<li><p><code>list_lambdas()</code> and <code>delete_lambda()</code> handle listing and cleanup.</p>
</li>
</ul>
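<p>If you want to test the function from the terminal too, <code>LambdaManager</code> could grow an invoke method. A hedged sketch (the method and helper names are my additions; <code>lambda_client.invoke()</code> returns the response body as a streaming <code>Payload</code> that has to be read and decoded):</p>

```python
import json

# Hypothetical extension (not in the post's repo): decode the Payload stream
# bytes returned by lambda_client.invoke() into a Python object.
def parse_invoke_payload(payload_bytes):
    return json.loads(payload_bytes.decode("utf-8"))

# A matching method on LambdaManager might look like:
# def invoke_lambda(self, function_name, event=None):
#     response = self.lambda_client.invoke(
#         FunctionName=function_name,
#         InvocationType="RequestResponse",
#         Payload=json.dumps(event or {}),
#     )
#     return parse_invoke_payload(response["Payload"].read())
```

<p>For our sample handler above, the decoded payload would be the dict with <code>statusCode</code> and <code>body</code> that <code>lambda_handler</code> returns.</p>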
<p>And here's our simple <code>lambda_function.py</code> file that acts as the Lambda code:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event, context</span>):</span>
    print(<span class="hljs-string">"Hello from Pravesh - Your AWS Community Builder!"</span>) 
    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">'statusCode'</span>: <span class="hljs-number">200</span>,
        <span class="hljs-string">'body'</span>: json.dumps(<span class="hljs-string">'Hello from Pravesh - Your AWS Community Builder!'</span>)
    }
</code></pre>
<h3 id="heading-4-connecting-everything-with-mainpy-cli">🧭 <strong>4. Connecting Everything With</strong> <code>main.py</code> (CLI)</h3>
<p>To make this project actually feel like a <strong>real CLI tool</strong>, we'll use Python's built-in <code>argparse</code> module.<br />This allows us to run commands like:</p>
<pre><code class="lang-bash">python main.py ec2 create
python main.py s3 list
python main.py lambda delete &lt;function-name&gt;
</code></pre>
<p>Here's the full CLI script 👇</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> argparse
<span class="hljs-keyword">from</span> ec2_manager <span class="hljs-keyword">import</span> EC2Manager
<span class="hljs-keyword">from</span> s3_manager <span class="hljs-keyword">import</span> S3Manager
<span class="hljs-keyword">from</span> lambda_manager <span class="hljs-keyword">import</span> LambdaManager

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    parser = argparse.ArgumentParser(
        description=<span class="hljs-string">"AWS Automation CLI using Boto3 (EC2 &amp; S3)"</span>
    )

    subparsers = parser.add_subparsers(dest=<span class="hljs-string">"service"</span>, help=<span class="hljs-string">"Choose AWS service"</span>)

    <span class="hljs-comment"># EC2 Commands</span>
    ec2_parser = subparsers.add_parser(<span class="hljs-string">"ec2"</span>, help=<span class="hljs-string">"Manage EC2 instances"</span>)
    ec2_subparsers = ec2_parser.add_subparsers(dest=<span class="hljs-string">"action"</span>, help=<span class="hljs-string">"EC2 actions"</span>)

    ec2_create = ec2_subparsers.add_parser(<span class="hljs-string">"create"</span>, help=<span class="hljs-string">"Create EC2 instance"</span>)
    ec2_create.add_argument(<span class="hljs-string">"--image-id"</span>, default=<span class="hljs-string">"ami-0c94855ba95c71c99"</span>, help=<span class="hljs-string">"AMI ID"</span>)
    ec2_create.add_argument(<span class="hljs-string">"--instance-type"</span>, default=<span class="hljs-string">"t2.micro"</span>, help=<span class="hljs-string">"Instance type"</span>)
    ec2_create.add_argument(<span class="hljs-string">"--key-name"</span>, help=<span class="hljs-string">"Optional key pair name"</span>)

    ec2_subparsers.add_parser(<span class="hljs-string">"list"</span>, help=<span class="hljs-string">"List EC2 instances"</span>)
    ec2_terminate = ec2_subparsers.add_parser(<span class="hljs-string">"terminate"</span>, help=<span class="hljs-string">"Terminate EC2 instance"</span>)
    ec2_terminate.add_argument(<span class="hljs-string">"instance_id"</span>, help=<span class="hljs-string">"Instance ID to terminate"</span>)

    <span class="hljs-comment"># S3 Commands</span>
    s3_parser = subparsers.add_parser(<span class="hljs-string">"s3"</span>, help=<span class="hljs-string">"Manage S3 buckets"</span>)
    s3_subparsers = s3_parser.add_subparsers(dest=<span class="hljs-string">"action"</span>, help=<span class="hljs-string">"S3 actions"</span>)
    s3_subparsers.add_parser(<span class="hljs-string">"list"</span>, help=<span class="hljs-string">"List S3 buckets"</span>)
    s3_create = s3_subparsers.add_parser(<span class="hljs-string">"create"</span>, help=<span class="hljs-string">"Create S3 bucket"</span>)
    s3_create.add_argument(<span class="hljs-string">"bucket_name"</span>, help=<span class="hljs-string">"Bucket name to create"</span>)
    s3_delete = s3_subparsers.add_parser(<span class="hljs-string">"delete"</span>, help=<span class="hljs-string">"Delete S3 bucket"</span>)
    s3_delete.add_argument(<span class="hljs-string">"bucket_name"</span>, help=<span class="hljs-string">"Bucket name to delete"</span>)

    <span class="hljs-comment"># Lambda Commands</span>
    lambda_parser = subparsers.add_parser(<span class="hljs-string">"lambda"</span>, help=<span class="hljs-string">"Manage AWS Lambda functions"</span>)
    lambda_subparsers = lambda_parser.add_subparsers(dest=<span class="hljs-string">"action"</span>, help=<span class="hljs-string">"Lambda actions"</span>)
    lambda_create = lambda_subparsers.add_parser(<span class="hljs-string">"create"</span>, help=<span class="hljs-string">"Create Lambda function"</span>)
    lambda_create.add_argument(<span class="hljs-string">"function_name"</span>, help=<span class="hljs-string">"Lambda function name"</span>)
    lambda_create.add_argument(<span class="hljs-string">"code_path"</span>, help=<span class="hljs-string">"Path to Python file (e.g. lambda_function.py)"</span>)
    lambda_create.add_argument(<span class="hljs-string">"--handler"</span>, default=<span class="hljs-string">"lambda_function.lambda_handler"</span>, help=<span class="hljs-string">"Handler name"</span>)
    lambda_create.add_argument(<span class="hljs-string">"--runtime"</span>, default=<span class="hljs-string">"python3.9"</span>, help=<span class="hljs-string">"Runtime version"</span>)
    lambda_subparsers.add_parser(<span class="hljs-string">"list"</span>, help=<span class="hljs-string">"List Lambda functions"</span>)
    lambda_delete = lambda_subparsers.add_parser(<span class="hljs-string">"delete"</span>, help=<span class="hljs-string">"Delete Lambda function"</span>)
    lambda_delete.add_argument(<span class="hljs-string">"function_name"</span>, help=<span class="hljs-string">"Lambda function name to delete"</span>)

    args = parser.parse_args()

    <span class="hljs-comment"># Service Handling</span>
    <span class="hljs-keyword">if</span> args.service == <span class="hljs-string">"ec2"</span>:
        ec2 = EC2Manager()
        <span class="hljs-keyword">if</span> args.action == <span class="hljs-string">"create"</span>:
            ec2.create_instance(image_id=args.image_id, instance_type=args.instance_type, key_name=args.key_name)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"list"</span>:
            instances = ec2.list_instances()
            <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> instances:
                print(<span class="hljs-string">"No EC2 instances found."</span>)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">"\nEC2 Instances:"</span>)
                <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> instances:
                    print(<span class="hljs-string">f"<span class="hljs-subst">{i[<span class="hljs-string">'InstanceId'</span>]}</span> | <span class="hljs-subst">{i[<span class="hljs-string">'State'</span>]}</span> | <span class="hljs-subst">{i[<span class="hljs-string">'Type'</span>]}</span> | <span class="hljs-subst">{i[<span class="hljs-string">'PublicIP'</span>]}</span>"</span>)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"terminate"</span>:
            ec2.terminate_instance(args.instance_id)

    <span class="hljs-keyword">elif</span> args.service == <span class="hljs-string">"s3"</span>:
        s3 = S3Manager()
        <span class="hljs-keyword">if</span> args.action == <span class="hljs-string">"create"</span>:
            s3.create_bucket(args.bucket_name)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"list"</span>:
            buckets = s3.list_buckets()
            <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> buckets:
                print(<span class="hljs-string">"No buckets found."</span>)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">"\nS3 Buckets:"</span>)
                <span class="hljs-keyword">for</span> b <span class="hljs-keyword">in</span> buckets:
                    print(<span class="hljs-string">f"- <span class="hljs-subst">{b}</span>"</span>)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"delete"</span>:
            s3.delete_bucket(args.bucket_name)

    <span class="hljs-keyword">elif</span> args.service == <span class="hljs-string">"lambda"</span>:
        lm = LambdaManager()
        <span class="hljs-keyword">if</span> args.action == <span class="hljs-string">"create"</span>:
            lm.create_lambda(args.function_name, args.code_path, handler_name=args.handler, runtime=args.runtime)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"list"</span>:
            functions = lm.list_lambdas()
            <span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> functions:
                print(<span class="hljs-string">"No Lambda functions found."</span>)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">"\nLambda Functions:"</span>)
                <span class="hljs-keyword">for</span> f <span class="hljs-keyword">in</span> functions:
                    print(<span class="hljs-string">f"<span class="hljs-subst">{f[<span class="hljs-string">'Name'</span>]}</span> | <span class="hljs-subst">{f[<span class="hljs-string">'Runtime'</span>]}</span> | <span class="hljs-subst">{f[<span class="hljs-string">'Arn'</span>]}</span>"</span>)
        <span class="hljs-keyword">elif</span> args.action == <span class="hljs-string">"delete"</span>:
            lm.delete_lambda(args.function_name)
    <span class="hljs-keyword">else</span>:
        parser.print_help()

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    main()
</code></pre>
<p>This is where everything <strong>comes together</strong>. Each manager class is imported, and <code>argparse</code> routes the commands to the right function.</p>
<h2 id="heading-pro-tip-write-the-code-yourself">🧠 <strong>Pro Tip: Write the Code Yourself</strong></h2>
<p>Now comes the most important part: <strong>don't just copy the code.</strong><br />Write the <code>ec2_manager.py</code> file line by line in your own workspace. Understand:</p>
<ul>
<li><p>Why we use <code>try/except</code></p>
</li>
<li><p>How <code>boto3</code> clients are created</p>
</li>
<li><p>How API calls are structured and handled</p>
</li>
</ul>
<p>Then move on to <strong>S3</strong> and <strong>Lambda</strong>.<br />If you get stuck, look back at the repo for hints, but <strong>type it yourself</strong>. This will build <strong>muscle memory</strong> and confidence, which is crucial in the age of AI-assisted development.</p>
<p>Even though tools can generate code for you, <em>knowing how to build it yourself</em> gives you the real power.</p>
<h2 id="heading-setting-up-the-virtual-environment">🛠 <strong>Setting Up the Virtual Environment</strong></h2>
<p>Once your files are ready, set up the virtual environment and install dependencies:</p>
<pre><code class="lang-bash">python3 -m venv venv
<span class="hljs-built_in">source</span> venv/bin/activate
pip3 install boto3 botocore
</code></pre>
<h2 id="heading-running-the-cli-commands">🚀 <strong>Running the CLI Commands</strong></h2>
<h3 id="heading-ec2">EC2</h3>
<pre><code class="lang-bash">python main.py ec2 create
python main.py ec2 list
python main.py ec2 terminate &lt;instance-id&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760083800466/4aa15b7d-0ade-45b1-b5a5-fdb21949e109.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-s3">S3</h3>
<pre><code class="lang-bash">python main.py s3 create &lt;bucket-name&gt;
python main.py s3 list
python main.py s3 delete &lt;bucket-name&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760083807804/b5de7f75-9f54-4a5d-9cee-f3ba35917d1e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-lambda">Lambda</h3>
<pre><code class="lang-bash">python main.py lambda create &lt;function-name&gt; lambda_function.py
python main.py lambda list
python main.py lambda delete &lt;function-name&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760083839724/d14e0dc3-e3a0-48e9-8b94-146e11d05c4b.png" alt class="image--center mx-auto" /></p>
<p>Before deleting your Lambda function, open the <strong>AWS Lambda Dashboard</strong>, find your function, and give it a test.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760083861033/02ceed1f-735c-4349-96a2-23754807abd6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760083865624/9b3a6d29-0b23-43eb-96bb-97ad4a9c1f5a.png" alt class="image--center mx-auto" /></p>
<p>You'll see your <code>lambda_function.py</code> code running and the expected output returned in the console.</p>
<hr />
<h2 id="heading-conclusion">🏁 <strong>Conclusion</strong></h2>
<p>And that's a wrap, folks! 🎉</p>
<p>We just built a <strong>fully functional Python CLI tool</strong> that can manage your <strong>AWS EC2</strong>, <strong>S3</strong>, and <strong>Lambda</strong> resources, all from the command line. This might look simple at first, but projects like this help you truly understand how <strong>DevOps engineers use code to automate and simplify daily cloud operations</strong>.</p>
<p>This is just the beginning. You can extend this CLI to include:</p>
<ul>
<li><p>More AWS services like RDS, DynamoDB, and CloudWatch 📈</p>
</li>
<li><p>Advanced features like error handling and logging 🧭</p>
</li>
<li><p>Role-based actions with IAM policies 🔐</p>
</li>
</ul>
<p>I've pushed the entire project to GitHub.<br />👉 <a class="post-section-overview" href="#"><strong>Make sure to fork and star the repo</strong></a> if you found it useful; it really motivates me to create more such DevOps + Python projects.</p>
<p>If you build your own version, I'd love to see it! Share it with me or tag me on socials:</p>
<ul>
<li><p>🐦 Twitter: <a target="_blank" href="https://twitter.com/praveshstwt">@praveshstwt</a></p>
</li>
<li><p>💼 LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">Pravesh Sudha</a></p>
</li>
<li><p>📝 Blog: <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a></p>
</li>
<li><p>🐙 GitHub: <a target="_blank" href="https://github.com/PraveshSudha">github.com/PraveshSudha</a></p>
</li>
</ul>
<p>Until next time, keep automating and keep building. 💪</p>
]]></description><link>https://blog.praveshsudha.com/build-your-own-aws-devops-cli-with-python-and-boto3</link><guid isPermaLink="true">https://blog.praveshsudha.com/build-your-own-aws-devops-cli-with-python-and-boto3</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Python]]></category><category><![CDATA[lambda]]></category><category><![CDATA[ec2]]></category><category><![CDATA[S3]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[Terraform Modules: The Secret Sauce to Scalable Infrastructure]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of <strong>Infrastructure and Automation</strong>! 🚀<br />In today's post, we're going to explore one of the fundamental building blocks of Terraform: the <strong>Module</strong>.</p>
<p>If you've been using Terraform for a while, you already know how quickly your configuration files can grow messy as your infrastructure expands. That's where modules come to the rescue. They help you organize, reuse, and standardize your Terraform code, making your infrastructure cleaner, more scalable, and much easier to maintain.</p>
<p>In this guide, we'll break down:</p>
<ul>
<li><p><strong>What a Terraform module is</strong></p>
</li>
<li><p><strong>How its structured</strong></p>
</li>
<li><p><strong>Why using modules is beneficial</strong></p>
</li>
<li><p>And finally, <strong>how to create your own module</strong> for a real-world use case.</p>
</li>
</ul>
<p>To wrap things up, we'll get hands-on and build a Terraform module for both <strong>VPC</strong> and <strong>EC2</strong>, deploy an instance, and serve a simple <code>index.html</code> page from it.</p>
<p>So, without further ado, let's dive right in! 🌍</p>
<hr />
<h2 id="heading-youtube-demonstration">💡 Youtube Demonstration</h2>
<div class="embed-wrapper"><a class="embed-card" href="https://youtu.be/_qmebISFHM8">https://youtu.be/_qmebISFHM8</a></div>
<hr />
<h2 id="heading-pre-requisites">💡 Pre-Requisites</h2>
<p>Before we dive into building our Terraform module, let's make sure your workspace is all set up and ready to go. Here's what you'll need:</p>
<ul>
<li><p>🧑‍💻 <strong>An AWS Account</strong> with an <strong>IAM user</strong> that has <strong>Full Access</strong> to both <strong>VPC</strong> and <strong>EC2</strong>.<br />  The IAM user should have an <strong>Access Key</strong> generated and configured with the <strong>AWS CLI</strong> on your system.<br />  If you're new to this step, I've covered it in detail here:<br />  👉 <a target="_blank" href="https://blog.praveshsudha.com/learn-how-to-deploy-a-three-tier-application-on-aws-eks-using-terraform-with-best-practices#heading-step-1-set-up-aws-cli-and-iam-user">Set up AWS CLI and IAM User (Step 1 from my EKS blog)</a></p>
</li>
<li><p><strong>Basic knowledge of Terraform</strong>: understanding what it is and how it works will make things much smoother.</p>
</li>
</ul>
<p>Once you have these requirements in place, we're all set to kick off our journey into <strong>Terraform Modules</strong>! 🎯</p>
<hr />
<h2 id="heading-what-is-a-terraform-module">💡 What is a Terraform Module?</h2>
<p>Terraform lets you define your infrastructure as code using <strong>HashiCorp Configuration Language (HCL)</strong>. And like any programming language, it embraces one golden principle: <strong>DRY (Don't Repeat Yourself)</strong>.</p>
<p>Instead of repeatedly provisioning similar resources (like networking components or security groups) for every new environment, Terraform allows you to <strong>encapsulate</strong> those recurring configurations inside a <strong>module</strong>.</p>
<p>In simple terms, a <strong>Terraform module</strong> is a <strong>collection of Terraform configuration files</strong> (<code>.tf</code> or <code>.tf.json</code>) that live together in the same directory and work as a single logical unit.</p>
<p>Here are a few examples of what Terraform modules can represent:</p>
<ul>
<li><p>🧱 An <strong>AWS VPC module</strong> containing subnets, route tables, and internet gateways</p>
</li>
<li><p>💾 A <strong>Microsoft SQL Always On cluster</strong> in Azure, including NSGs</p>
</li>
<li><p> A <strong>GCP Project module</strong> that enables APIs and sets permissions</p>
</li>
</ul>
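<p>On disk, a module is nothing more exotic than a directory of <code>.tf</code> files. A conventional (though not mandatory) layout for a local VPC module might look like this, where the file names follow the community convention rather than any Terraform requirement:</p>

```
modules/vpc/
├── main.tf       # the resources the module creates
├── variables.tf  # input variables the caller can set
└── outputs.tf    # values the module exposes back to the caller
```

<p>Anything that calls this directory with a <code>module</code> block becomes its "root module", wiring inputs in and reading outputs back.</p>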
<hr />
<h4 id="heading-why-modules-matter"><strong>Why Modules Matter</strong></h4>
<p>Imagine you're setting up an EC2 instance the traditional way, defining every component manually: the security group, IAM role, instance profile, and the instance itself. That's a lot of repeated configuration!</p>
<p>Here's what that would look like 👇</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"my_app_sg"</span> {
  name        = <span class="hljs-string">"my-app-sg"</span>
  vpc_id      = var.vpc_id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = <span class="hljs-string">"tcp"</span>
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }
}

resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"my_app_role"</span> {
  name = <span class="hljs-string">"my_app_role"</span>
  assume_role_policy = &lt;&lt;EOF
{ ... }
EOF
}

resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"my_app_profile"</span> {
  name = <span class="hljs-string">"my_app_profile"</span>
  role = aws_iam_role.my_app_role.name
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"my_app_server"</span> {
  ami           = <span class="hljs-string">"ami-0c55b159fbd7718e9"</span>
  instance_type = <span class="hljs-string">"t3.micro"</span>
  vpc_security_group_ids = [aws_security_group.my_app_sg.id]
  iam_instance_profile   = aws_iam_instance_profile.my_app_profile.name
  subnet_id              = var.subnet_id
  user_data              = file(<span class="hljs-string">"setup.sh"</span>)
  tags = {
    Name = <span class="hljs-string">"my-app-server"</span>
  }
}
</code></pre>
<p>Now, let's see how <strong>modules</strong> make this effortless and elegant:</p>
<pre><code class="lang-bash">module <span class="hljs-string">"ec2_instance"</span> {
  <span class="hljs-built_in">source</span>  = <span class="hljs-string">"terraform-aws-modules/ec2-instance/aws"</span>
  version = <span class="hljs-string">"5.6.0"</span>

  name          = <span class="hljs-string">"my-app-server"</span>
  ami           = <span class="hljs-string">"ami-0c55b159fbd7718e9"</span>
  instance_type = <span class="hljs-string">"t3.micro"</span>
  subnet_id     = var.subnet_id

  tags = {
    Name = <span class="hljs-string">"my-app-server"</span>
  }

  <span class="hljs-comment"># Configuration for managed resources</span>
  create_iam_instance_profile = <span class="hljs-literal">true</span>
  iam_role_name               = <span class="hljs-string">"my_app_role"</span>
  iam_role_policy_json        = <span class="hljs-string">"{ ... }"</span>

  security_group_ingress = [
    {
      description = <span class="hljs-string">"HTTP access from anywhere"</span>
      from_port   = 80
      to_port     = 80
      protocol    = <span class="hljs-string">"tcp"</span>
      cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
    }
  ]
}
</code></pre>
<p>See the difference?<br />Instead of defining every resource from scratch, you just <strong>call a module</strong> and pass the required inputs. Terraform takes care of the rest!</p>
<p>We'll create our <strong>own module</strong> in the practical demonstration later, but this example perfectly shows why modules are so powerful.</p>
<hr />
<h4 id="heading-why-use-terraform-modules"><strong>Why Use Terraform Modules?</strong></h4>
<p>Heres how modules simplify your infrastructure management:</p>
<ul>
<li><p>📦 <strong>Package related resources together</strong> into a reusable configuration</p>
</li>
<li><p>👨👩👧👦 <strong>Share standardized configurations</strong> across your team or projects</p>
</li>
<li><p>💡 <strong>Embrace DRY principles</strong>, reducing repetitive code</p>
</li>
<li><p> <strong>Minimize human error</strong> when referencing complex resources manually</p>
</li>
</ul>
<hr />
<h4 id="heading-quick-question-how-are-terraform-resources-different-from-modules"><strong>Quick Question: How Are Terraform Resources Different from Modules?</strong></h4>
<p>That's a great distinction to understand early on 👇</p>
<ul>
<li><p>A <strong>resource</strong> in Terraform describes <strong>a single piece of infrastructure</strong>, like a VPC, subnet, EC2 instance, or IAM role.</p>
</li>
<li><p>A <strong>module</strong>, on the other hand, is a <strong>collection of resources</strong> grouped together to form a reusable, logical unit: for example, a complete VPC setup or an EC2 deployment stack.</p>
</li>
</ul>
<hr />
<h2 id="heading-benefits-of-using-terraform-modules"><strong>💡 Benefits of Using Terraform Modules</strong></h2>
<p>Now that we understand what modules are and how they work, let's explore <strong>why</strong> they're such an essential part of Terraform best practices.</p>
<p>Using modules isnt just about cleaner code  its about making your infrastructure <strong>reusable</strong>, <strong>scalable</strong>, and <strong>collaborative</strong>. Lets look at the key benefits 👇</p>
<h4 id="heading-1-reusability">🧩 <strong>1. Reusability</strong></h4>
<p>Just like every programming language has <strong>functions</strong>, <strong>classes</strong>, or <strong>libraries</strong>, Terraform has <strong>modules</strong>.<br />They allow you to <strong>abstract a set of resources</strong> into a single reusable component that can be used across multiple projects or environments.</p>
<p>For example, once you've built a <strong>VPC module</strong>, you can reuse it for your <strong>dev</strong>, <strong>staging</strong>, and <strong>production</strong> environments without rewriting a single line of networking code.</p>
<p>In short: <strong>build once, use anywhere.</strong></p>
<h4 id="heading-2-scalability">🚀 <strong>2. Scalability</strong></h4>
<p>Modules make scaling your infrastructure <strong>seamless</strong>.</p>
<p>Let's say you have a <strong>security group</strong> defined for your development environment that allows traffic on port <code>27017</code> for MongoDB. Instead of manually updating that configuration across multiple environments, you can simply <strong>update the module</strong> and propagate the change everywhere it's used.</p>
<p>This ensures <strong>consistency</strong> and saves tons of time, especially when managing large-scale infrastructure across multiple environments.</p>
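<p>As a rough sketch of that idea (the module path and variable names are hypothetical), every environment calls the same module, so changing the port logic once inside the module updates all callers on their next <code>terraform apply</code>:</p>
<pre><code class="lang-bash">module <span class="hljs-string">"dev_sg"</span> {
  <span class="hljs-built_in">source</span>  = <span class="hljs-string">"./modules/security-group"</span>
  vpc_id  = module.dev_vpc.vpc_id
  db_port = 27017  <span class="hljs-comment"># MongoDB</span>
}

module <span class="hljs-string">"staging_sg"</span> {
  <span class="hljs-built_in">source</span>  = <span class="hljs-string">"./modules/security-group"</span>
  vpc_id  = module.staging_vpc.vpc_id
  db_port = 27017
}
</code></pre>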
<h4 id="heading-3-team-collaboration">🤝 <strong>3. Team Collaboration</strong></h4>
<p>As your Terraform usage grows across teams, modules help maintain <strong>standardization</strong> and <strong>best practices</strong>.</p>
<p>Platform teams can create a <strong>catalog of pre-approved modules</strong> (for example, VPC, EC2, or EKS modules) that application teams can use directly. This ensures that all deployed infrastructure meets <strong>security</strong>, <strong>compliance</strong>, and <strong>organizational standards</strong>, while allowing developers to focus on their applications.</p>
<p>In short, modules enable <strong>smooth collaboration</strong>, <strong>reduce misconfigurations</strong>, and promote a <strong>shared IaC culture</strong> within your organization.</p>
<hr />
<h2 id="heading-a-practical-demonstration"><strong>💡 A Practical Demonstration</strong></h2>
<p>Alright, now that we've covered the theory, it's time to get our hands dirty! 🧑💻</p>
<p>Earlier, we looked at an example using the <strong>official AWS EC2 module</strong>, but in this section, we'll <strong>create our own Terraform modules</strong> from scratch.<br />Our goal? To build a <strong>modular Terraform setup</strong> that provisions a <strong>VPC</strong>, <strong>Security Group</strong>, and <strong>EC2 instance</strong>, which in turn hosts a <strong>static portfolio page</strong>.</p>
<h4 id="heading-what-well-be-building">🧱 <strong>What We'll Be Building</strong></h4>
<p>We'll use some of the most common AWS infrastructure components:</p>
<ul>
<li><p><strong>VPC</strong> (Virtual Private Cloud)</p>
</li>
<li><p><strong>Subnets</strong></p>
</li>
<li><p><strong>Internet Gateway (IGW)</strong></p>
</li>
<li><p><strong>Route Table (RT)</strong></p>
</li>
<li><p><strong>Security Group (SG)</strong></p>
</li>
<li><p><strong>EC2 Instance</strong></p>
</li>
</ul>
<p>These resources form the <strong>foundation of most web applications</strong>, whether it's an e-commerce site, a personal portfolio, or a marketing page.</p>
<p>In production environments, infrastructure code is usually <strong>broken down into reusable modules</strong> to:</p>
<ul>
<li><p>Avoid duplication</p>
</li>
<li><p>Improve maintainability</p>
</li>
<li><p>Enable better team collaboration</p>
</li>
</ul>
<p>For example, a DevOps team might maintain a <strong>VPC module</strong> that's reused across multiple environments (<strong>dev</strong>, <strong>staging</strong>, and <strong>production</strong>), ensuring consistent infrastructure everywhere.</p>
<p>Our demo follows the same principle. We'll organize our project into <strong>separate modules</strong> for <code>vpc</code>, <code>security-group</code>, and <code>ec2</code>, each containing its own:</p>
<ul>
<li><p><code>main.tf</code>: core resource definitions</p>
</li>
<li><p><code>variables.tf</code>: variable declarations</p>
</li>
<li><p><code>outputs.tf</code>: exported outputs</p>
</li>
</ul>
<p>We'll keep variables flexible (no default values) so they can be customized per environment.</p>
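<p>Declaring a variable without a <code>default</code> looks roughly like this (a sketch; the variable name is illustrative). Terraform will then require a value per environment, for example via <code>-var</code> flags or a <code>.tfvars</code> file:</p>
<pre><code class="lang-bash">variable <span class="hljs-string">"vpc_cidr"</span> {
  <span class="hljs-built_in">type</span>        = string
  description = <span class="hljs-string">"CIDR block for the VPC (no default; must be supplied per environment)"</span>
}
</code></pre>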
<h4 id="heading-automation-with-userdata"><strong>Automation with</strong> <code>user_data</code></h4>
<p>Inside our <strong>EC2 module</strong>, we'll use a <code>user_data</code> script to automatically install <strong>Apache</strong> and deploy a simple <code>index.html</code> page.</p>
<p>In real-world deployments, such automation scripts (or tools like <strong>Ansible</strong> or <strong>Chef</strong>) are used to configure servers automatically at startup, eliminating the need for manual setup.<br />This approach is standard for bootstrapping <strong>web servers</strong>, <strong>application backends</strong>, or <strong>API services</strong>.</p>
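<p>As a sketch of what such a bootstrap script can look like (assuming an Ubuntu AMI; package names differ on Amazon Linux), the <code>user_data</code> might be:</p>
<pre><code class="lang-bash"><span class="hljs-comment">#!/bin/bash</span>
<span class="hljs-comment"># Runs once at first boot: install Apache and drop a simple page</span>
apt-get update -y
apt-get install -y apache2
<span class="hljs-built_in">echo</span> <span class="hljs-string">"&lt;h1&gt;Hello from Terraform!&lt;/h1&gt;"</span> &gt; /var/www/html/index.html
systemctl <span class="hljs-built_in">enable</span> --now apache2
</code></pre>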
<h4 id="heading-project-structure">📂 <strong>Project Structure</strong></h4>
<p>The complete code is available in my GitHub repository:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects">https://github.com/Pravesh-Sudha/terra-projects</a></p>
<p>Navigate to the <code>terra-modules</code> directory, and you'll find the following structure:</p>
<pre><code class="lang-bash">terra-modules/
├── main.tf                 <span class="hljs-comment"># Calls our custom modules</span>
├── variables.tf
├── outputs.tf
└── modules/
    ├── vpc/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── security-group/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── ec2/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
</code></pre>
<p>Here, each submodule (<code>vpc</code>, <code>security-group</code>, <code>ec2</code>) defines its own configuration logic, and the root <code>main.tf</code> file simply <strong>calls these modules</strong> and passes the required variables.</p>
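<p>In practice, that wiring looks roughly like this in the root <code>main.tf</code> (a sketch; the exact variable and output names live in the repo): one module's outputs feed the next module's inputs.</p>
<pre><code class="lang-bash">module <span class="hljs-string">"vpc"</span> {
  <span class="hljs-built_in">source</span>      = <span class="hljs-string">"./modules/vpc"</span>
  vpc_cidr    = var.vpc_cidr
  subnet_cidr = var.subnet_cidr
}

module <span class="hljs-string">"security_group"</span> {
  <span class="hljs-built_in">source</span> = <span class="hljs-string">"./modules/security-group"</span>
  vpc_id = module.vpc.vpc_id  <span class="hljs-comment"># output exported by the vpc module</span>
}

module <span class="hljs-string">"ec2"</span> {
  <span class="hljs-built_in">source</span>    = <span class="hljs-string">"./modules/ec2"</span>
  subnet_id = module.vpc.subnet_id
  sg_id     = module.security_group.sg_id
}
</code></pre>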
<p>Once everything is set, our <strong>static website server configuration</strong> will be ready to deploy.</p>
<h4 id="heading-running-the-terraform-project">🚀 <strong>Running the Terraform Project</strong></h4>
<p>Now lets see it in action!</p>
<ol>
<li><p><strong>Initialize Terraform</strong></p>
<pre><code class="lang-bash"> terraform init
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759906146438/98e3843a-a583-4bb1-b764-fe5fbc76d9ef.png" alt class="image--center mx-auto" /></p>
<p> This initializes the project and fetches all three modules.</p>
</li>
<li><p><strong>Review the execution plan</strong></p>
<pre><code class="lang-bash"> terraform plan
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759906154615/6aeeecd3-c58e-42fe-b4ee-109a23d2bf82.png" alt class="image--center mx-auto" /></p>
<p> You'll see that Terraform plans to create <strong>7 resources</strong> in total.</p>
</li>
<li><p><strong>Deploy the infrastructure</strong></p>
<pre><code class="lang-bash"> terraform apply --auto-approve
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759906168106/92c5093b-d7b7-41ea-b0cc-7905387cc921.png" alt class="image--center mx-auto" /></p>
<p> Wait for a minute or two; once the deployment completes, Terraform will output the <strong>public IP address</strong> of your EC2 instance.</p>
</li>
<li><p><strong>View your static website</strong><br /> Copy the public IP into your browser, and you'll see your <strong>web server running</strong>, serving the <code>index.html</code> page! 🎉</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759906181124/4784f16b-b363-415a-a916-c074384a16ff.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h4 id="heading-cleanup">🧹 <strong>Cleanup</strong></h4>
<p>When you're done experimenting, make sure to <strong>tear down the infrastructure</strong> to avoid unnecessary AWS costs:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759906205405/9e398423-b70c-49b2-bbc3-ebe096ae89c7.png" alt class="image--center mx-auto" /></p>
<p>With that, you've successfully created your <strong>own set of Terraform modules</strong>, a foundational skill for structuring production-grade Infrastructure as Code (IaC).</p>
<p>In case you are wondering how to reuse the components, here is an example file that creates another VPC using our custom module:</p>
<pre><code class="lang-bash">module <span class="hljs-string">"vpc"</span> <span class="hljs-string">"another-vpc"</span> {
    <span class="hljs-built_in">source</span> = <span class="hljs-string">"./modules/vpc"</span>
    vpc_cidr = <span class="hljs-string">"&lt;Provide the CIDR&gt;"</span>
    subnet_cidr = <span class="hljs-string">"&lt;Provide the CIDR&gt;"</span>
    app_name = var.app_name
}

variable <span class="hljs-string">"app_name"</span> {
    default = <span class="hljs-string">"Another-App"</span>
    <span class="hljs-built_in">type</span> = string
    description = <span class="hljs-string">"Name of the Application"</span>
}
</code></pre>
<hr />
<h2 id="heading-conclusion"><strong>💡 Conclusion</strong></h2>
<p>And thats a wrap! 🎉</p>
<p>In this guide, we explored the <strong>core concept of Terraform modules</strong>: what they are, why they're beneficial, and how they make your infrastructure <strong>reusable</strong>, <strong>scalable</strong>, and <strong>team-friendly</strong>.<br />We then put theory into practice by creating our <strong>own Terraform modules</strong> for <strong>VPC</strong>, <strong>Security Group</strong>, and <strong>EC2</strong>, and deployed a working <strong>static website</strong> on AWS, all using clean, modular Terraform code.</p>
<p>By now, you should have a solid understanding of how to:</p>
<ul>
<li><p>Structure Terraform projects into reusable modules</p>
</li>
<li><p>Simplify complex configurations</p>
</li>
<li><p>Collaborate efficiently within teams</p>
</li>
<li><p>Keep your infrastructure code DRY and maintainable</p>
</li>
</ul>
<p>The next step? Try expanding your modules! Add resources like <strong>S3 buckets</strong>, <strong>RDS databases</strong>, or <strong>Load Balancers</strong>, and see how modular design keeps your Terraform journey organized and production-ready.</p>
<h4 id="heading-references"><strong>References</strong></h4>
<p>If you'd like to go deeper, here are the key references I used while crafting this guide:</p>
<ul>
<li><p><a target="_blank" href="https://developer.hashicorp.com/terraform/language/modules">Terraform Modules: Official HashiCorp Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://www.env0.com/blog/terraform-modules">env0 Blog: Terraform Modules Explained</a></p>
</li>
<li><p><a target="_blank" href="https://spacelift.io/blog/what-are-terraform-modules-and-how-do-they-work">Spacelift Blog: What Are Terraform Modules and How Do They Work</a></p>
</li>
</ul>
<h4 id="heading-lets-connect"><strong>Let's Connect 🌐</strong></h4>
<p>If you enjoyed this blog and want to explore more about <strong>DevOps, Terraform, and Cloud automation</strong>, feel free to connect with me here:</p>
<ul>
<li><p>💼 <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">LinkedIn</a></p>
</li>
<li><p>🐦 <a target="_blank" href="https://x.com/praveshstwt">Twitter / X</a></p>
</li>
<li><p>📺 <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">YouTube</a></p>
</li>
<li><p>📝 <a target="_blank" href="https://blog.praveshsudha.com/">Blog Website</a></p>
</li>
</ul>
<p>If you found this post helpful, share it with your DevOps peers and drop your thoughts in the comments; I'd love to hear how <em>you</em> use Terraform modules in your projects!</p>
<p>👋 Adios Amigos!</p>
]]></description><link>https://blog.praveshsudha.com/terraform-modules-the-secret-sauce-to-scalable-infrastructure</link><guid isPermaLink="true">https://blog.praveshsudha.com/terraform-modules-the-secret-sauce-to-scalable-infrastructure</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Programming Blogs]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🌟 Terraform Meets DevSecOps: 5 Security Practices You Can’t Afford to Ignore]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome, Devs 👋 to the exciting world of <strong>Infrastructure as Code (IaC)</strong> and automation!<br />If you've been working with Terraform, you already know how powerful it is at spinning up infrastructure in minutes. But with great power comes, well, the need for <strong>great security</strong>.</p>
<p>In today's cloud-first job market, security isn't optional; it's a <strong>core skill</strong>. DevOps engineers are now expected to think like <strong>DevSecOps engineers</strong>, ensuring that every piece of code, every module, and every configuration follows security best practices. Why? Because misconfigured infrastructure is like leaving your front door unlocked, and hackers love it.</p>
<p>That's exactly what we're diving into today. In this blog, I'll walk you through <strong>Terraform security best practices</strong> that will help you safeguard your cloud environments, avoid costly mistakes, and step confidently into the world of DevSecOps.</p>
<p>So, grab your coffee and let's get started! 🚀</p>
<hr />
<h2 id="heading-youtube-demonstration">💡 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/AgFcX-H3SJU">https://youtu.be/AgFcX-H3SJU</a></div>
<hr />
<h2 id="heading-1-verify-modules-and-providers">1. Verify Modules and Providers 🔍</h2>
<p>When working with Terraform, your <strong>providers and modules</strong> are just like external dependencies in application development. And just as you wouldn't blindly install a random library in your production code, you shouldn't blindly trust Terraform modules or providers either.</p>
<p>Think of it this way:</p>
<ul>
<li><p><strong>Providers</strong> are your bridge to cloud platforms (AWS, Azure, GCP, etc.).</p>
</li>
<li><p><strong>Modules</strong> are reusable building blocks that make your Terraform code cleaner and more scalable.</p>
</li>
</ul>
<p>But since both come from external sources, treating them with security-first practices is a must.</p>
<h3 id="heading-always-pin-the-source-and-version">Always Pin the Source and Version</h3>
<p>Instead of using a vague provider declaration like this:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
    region = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p>Lock it down with a clear <strong>source</strong> and <strong>version</strong>:</p>
<pre><code class="lang-bash">terraform {  
  required_providers {
    aws = {
      <span class="hljs-built_in">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>      
      version = <span class="hljs-string">"~&gt; 5.98.0"</span>    
    }  
  }
}
</code></pre>
<p>This ensures you're not pulling in an unverified or malicious provider update without realizing it. Think of it as freezing dependencies in your app code: predictability and security go hand in hand.</p>
<h3 id="heading-for-organizations">🏢 For Organizations</h3>
<p>If you're working in a team or an enterprise environment, you'll likely have a <strong>private registry</strong> in place. That's a great way to enforce version control and ensure everyone only uses <strong>approved and vetted providers/modules</strong>.</p>
<p>Some best practices here:</p>
<ul>
<li><p>Use a <strong>filesystem or network mirror</strong> to point to an internal artifact repo (similar to how you store container images).</p>
</li>
<li><p>Implement a <strong>minimal module registry API</strong> so teams can share trusted Terraform modules across projects.</p>
</li>
<li><p>Review and approve new modules/providers before they go into production pipelines.</p>
</li>
</ul>
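<p>As a sketch, a network mirror is declared in the Terraform CLI configuration file (<code>~/.terraformrc</code> on Linux/macOS); the URL below is a placeholder for your internal artifact repository:</p>
<pre><code class="lang-bash">provider_installation {
  network_mirror {
    url = <span class="hljs-string">"https://terraform-mirror.example.com/providers/"</span>
  }
}
</code></pre>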
<p>In short: <strong>treat Terraform providers and modules like code libraries. Verify, version, and control them.</strong></p>
<hr />
<h2 id="heading-2-dont-store-the-state-file-locally">2. Don't Store the State File Locally 🔒</h2>
<p>One of the most common mistakes Terraform beginners (and sometimes even pros) make is <strong>keeping the state file (</strong><code>terraform.tfstate</code>) on their local machine.</p>
<p>Why is this risky?</p>
<ul>
<li><p>Your state file contains <strong>sensitive data</strong> like resource IDs, configuration details, and even secrets.</p>
</li>
<li><p>If stored locally or pushed to a public repo (e.g., GitHub), it's basically like handing over your <strong>cloud blueprint</strong> to attackers.</p>
</li>
<li><p>A leaked state file can lead to <strong>account compromise</strong> or <strong>full infra takeover</strong>.</p>
</li>
</ul>
<h3 id="heading-best-practice-remote-and-encrypted-storage">Best Practice: Remote and Encrypted Storage</h3>
<p>Instead of keeping state locally, store it in a <strong>secure, remote backend</strong> that your team and organization can access safely. Make sure its <strong>encrypted</strong> to protect against unauthorized access.</p>
<p>Here's an example using <strong>AWS S3 with encryption and versioning</strong>:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket       = <span class="hljs-string">"my-terraform-state-bucket"</span>
    key          = <span class="hljs-string">"my-terraform-state.tfstate"</span>
    region       = <span class="hljs-string">"eu-west-1"</span>
    use_lockfile = <span class="hljs-literal">true</span>
    encrypt      = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<h3 id="heading-a-note-on-state-locking">🔑 A Note on State Locking</h3>
<p>Earlier, Terraform state locking was usually handled with <strong>S3 + DynamoDB</strong>. But with recent updates, S3 now has the <code>use_lockfile</code> property, making it easier to prevent race conditions when multiple people or pipelines try to update the state at the same time.</p>
<h3 id="heading-quick-checklist">🚀 Quick Checklist</h3>
<ul>
<li><p>Never commit state files to GitHub (or any VCS).</p>
</li>
<li><p>Always enable <strong>encryption</strong> and <strong>versioning</strong> in your remote backend.</p>
</li>
<li><p>Use <strong>role-based access control (RBAC)</strong> to restrict who can read/write to the state.</p>
</li>
</ul>
<p>In short: <strong><em>treat your Terraform state like a password vault; it holds the keys to your entire infrastructure.</em></strong> 🔐</p>
<hr />
<h2 id="heading-3-detect-vulnerabilities-early">3. Detect Vulnerabilities Early 🕵</h2>
<p>Security isn't something you add at the end. With Terraform (and IaC in general), you want to <strong>shift security left</strong>, meaning you catch issues while writing code, not after deploying it to production.</p>
<p>That's where <strong>static code analysis tools</strong> come in. These tools scan your Terraform configuration and flag potential misconfigurations or risky patterns <em>before</em> they ever touch your cloud environment.</p>
<h3 id="heading-what-they-catch">🚨 What They Catch</h3>
<ul>
<li><p><strong>Unintended resource exposure</strong> (e.g., open security groups, public S3 buckets).</p>
</li>
<li><p><strong>Weak encryption setups</strong> (like missing <code>encrypt = true</code>).</p>
</li>
<li><p><strong>Misconfigurations</strong> that could lead to compliance violations.</p>
</li>
</ul>
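<p>For instance, a rule like this (a deliberately insecure sketch) is exactly what these scanners flag: SSH left open to the whole internet.</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group_rule"</span> <span class="hljs-string">"bad_ssh"</span> {
  <span class="hljs-built_in">type</span>              = <span class="hljs-string">"ingress"</span>
  from_port         = 22
  to_port           = 22
  protocol          = <span class="hljs-string">"tcp"</span>
  cidr_blocks       = [<span class="hljs-string">"0.0.0.0/0"</span>]  <span class="hljs-comment"># flagged: SSH exposed to the world</span>
  security_group_id = aws_security_group.example.id
}
</code></pre>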
<h3 id="heading-popular-tools-for-terraform-security">🛠 Popular Tools for Terraform Security</h3>
<ul>
<li><p><a target="_blank" href="https://aquasecurity.github.io/tfsec/"><strong>tfsec</strong></a><strong>:</strong> Lightweight, easy-to-use, and integrates well with CI/CD pipelines.</p>
</li>
<li><p><a target="_blank" href="https://www.checkov.io/"><strong>Checkov</strong></a><strong>:</strong> Great for policy-as-code, compliance checks, and multi-cloud environments.</p>
</li>
<li><p><a target="_blank" href="https://runterrascan.io/"><strong>Terrascan</strong></a><strong>:</strong> One of the most popular tools, with 500+ built-in policies for Terraform, Kubernetes, and more.</p>
</li>
</ul>
<h3 id="heading-example-scanning-with-terrascan">Example: Scanning with Terrascan</h3>
<p>Run this command to analyze your Terraform code:</p>
<pre><code class="lang-bash">terrascan scan -f /path/to/terraform/code
</code></pre>
<p>It will quickly point out vulnerabilities, risky resources, or compliance issues, <strong>saving you hours of debugging and reducing security risks before deployment</strong>.</p>
<p>👉 Pro tip: Integrate these scans into your <strong>CI/CD pipeline</strong> so every commit or PR is automatically checked for vulnerabilities. That way, you're not just automating infra deployment; you're automating <strong>secure infra deployment</strong>. 🚀</p>
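<p>A minimal pipeline step could look like this (a sketch; the directory and severity threshold are assumptions, adjust them to your repo and policy):</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Fail the build if Terrascan finds high-severity issues</span>
terrascan scan -i terraform -d ./infra --severity high
</code></pre>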
<hr />
<h2 id="heading-4-apply-the-principle-of-least-privilege">4. Apply the Principle of Least Privilege 🔑</h2>
<p>When it comes to securing cloud infrastructure, one golden rule stands out: <strong>never give more access than necessary.</strong></p>
<p>This is the <strong>Principle of Least Privilege (PoLP)</strong>: granting only the minimum level of permissions required for a resource or user to function. By following PoLP, you reduce the attack surface and limit the potential damage in case of a breach.</p>
<p>Think of it like this: if your friend only needs to borrow your car keys, don't hand over the keys to your house too.</p>
<h3 id="heading-why-it-matters-in-terraform">Why It Matters in Terraform</h3>
<p>Terraform often interacts with your cloud provider (AWS, Azure, GCP) to spin up or manage resources. If the IAM roles or service accounts used by Terraform are <strong>too permissive</strong>, attackers can exploit that access to move laterally, exfiltrate data, or even shut down your infra.</p>
<h3 id="heading-example-minimal-policy-for-reading-terraform-state-from-s3">📝 Example: Minimal Policy for Reading Terraform State from S3</h3>
<p>Here's a sample IAM policy for <strong>an EC2 instance that only needs to read the Terraform state file</strong>:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-attr">"Statement"</span>: [
    {
      <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"AllowListAndLocation"</span>,
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Action"</span>: [
        <span class="hljs-string">"s3:ListBucket"</span>,
        <span class="hljs-string">"s3:GetBucketLocation"</span>
      ],
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::your-state-file-bucket"</span>
    },
    {
      <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"AllowReadStateFile"</span>,
      <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
      <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"s3:GetObject"</span>,
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::your-state-file-bucket/path/to/your/terraform.tfstate"</span>
    }
  ]
}
</code></pre>
<p>Notice how this policy <strong>doesn't allow write/delete actions</strong>, just enough access to list and read the state file.</p>
<h3 id="heading-best-practices">🚀 Best Practices</h3>
<ul>
<li><p>Always scope IAM roles/policies <strong>to the specific bucket, object, or resource path</strong>.</p>
</li>
<li><p>Avoid using wildcards (<code>*</code>) unless absolutely necessary.</p>
</li>
<li><p>Regularly audit IAM policies to check for over-privileged accounts.</p>
</li>
<li><p>Use tools like <strong>AWS IAM Access Analyzer</strong> to catch unintended permissions.</p>
</li>
</ul>
<p>In short: <strong><em>lock the doors you don't need to open.</em> By sticking to PoLP, you make life harder for attackers and safer for your team.</strong></p>
<hr />
<h2 id="heading-5-dont-modify-the-terraform-state-file-manually">5. Don't Modify the Terraform State File Manually</h2>
<p>Your <strong>Terraform state file</strong> is the single source of truth for your infrastructure. It keeps track of all the resources Terraform manages. Because of that, <strong>manually editing the state file is a recipe for disaster</strong>.</p>
<p>Why?</p>
<ul>
<li><p>It can <strong>corrupt the state</strong>, making Terraform lose track of your infra.</p>
</li>
<li><p>It introduces <strong>configuration drift</strong>: your code and your actual infra won't match.</p>
</li>
<li><p>Worst case, it can lead to <strong>unexpected resource destruction</strong> (yep, Terraform might just wipe out live resources).</p>
</li>
</ul>
<h3 id="heading-the-wrong-way">The Wrong Way</h3>
<p>Let's say you have this configuration for an EC2 instance:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
  ami           = <span class="hljs-string">"ami-0c55b159f2f12217c"</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>
}
</code></pre>
<p>Now you decide to rename <code>web_server</code> to <code>app_server</code>. You update the code like this:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"app_server"</span> {
  ami           = <span class="hljs-string">"ami-0c55b159f2f12217c"</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>
}
</code></pre>
<p>When you run <code>terraform plan</code>, Terraform sees this as:</p>
<ul>
<li><p><strong>Delete</strong>: <code>aws_instance.web_server</code></p>
</li>
<li><p><strong>Create</strong>: <code>aws_instance.app_server</code></p>
</li>
</ul>
<p>Meaning your EC2 instance is destroyed and replaced, causing unnecessary downtime.</p>
<h3 id="heading-the-right-way">The Right Way</h3>
<p>Instead of editing the state file or letting Terraform destroy and recreate resources, use the built-in <strong>state management command</strong>:</p>
<pre><code class="lang-bash">terraform state mv aws_instance.web_server aws_instance.app_server
</code></pre>
<p>This safely <strong>moves the resource address</strong> in the state file without touching the actual EC2 instance. Terraform will now recognize the resource under its new name, keeping your infra intact.</p>
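<p>Since Terraform 1.1, you can also record such a rename declaratively with a <code>moved</code> block in your configuration, so teammates and CI pipelines apply the same remapping automatically:</p>
<pre><code class="lang-bash">moved {
  from = aws_instance.web_server
  to   = aws_instance.app_server
}
</code></pre>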
<h3 id="heading-best-practice-recap">🚀 Best Practice Recap</h3>
<ul>
<li><p>Never open and edit <code>terraform.tfstate</code> directly.</p>
</li>
<li><p>Always use Terraform CLI (<code>terraform state mv</code>, <code>terraform state rm</code>, <code>terraform import</code>, etc.) to manage resources.</p>
</li>
<li><p>Commit your config changes along with proper state moves to avoid drift.</p>
</li>
</ul>
<p>Remember: <strong><em>your state file is like the brain of Terraform; don't perform surgery on it without the right tools.</em> 🧠🔧</strong></p>
<hr />
<h2 id="heading-6-practical-demonstration">6. Practical Demonstration 🛠</h2>
<p>Now that we've gone through the <strong>5 Terraform security best practices</strong>, let's put them into action. Nothing beats seeing these principles applied in a real-world example.</p>
<p>We'll set up a simple Terraform configuration that:</p>
<ul>
<li><p>Pins the <strong>provider source and version</strong>.</p>
</li>
<li><p>Stores the <strong>state file securely in S3 with encryption + locking</strong>.</p>
</li>
<li><p>Implements <strong>Principle of Least Privilege (PoLP)</strong> with IAM policies.</p>
</li>
<li><p>Demonstrates <strong>state management commands</strong> instead of manual edits.</p>
</li>
<li><p>Uses <strong>Terrascan</strong> to detect misconfigurations early.</p>
</li>
</ul>
<p>Ready? Let's dive in 🚀</p>
<p>To make things easy, I have hosted the code for this project on my GitHub account; just clone the following repo:<br /><a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects"><strong>Best-Practices project</strong></a><br />Navigate inside the <code>best-practices</code> directory.</p>
<h3 id="heading-step-1-create-maintf-and-specify-provider">Step 1: Create <code>main.tf</code> and Specify Provider</h3>
<pre><code class="lang-bash">terraform {
  required_providers {
    aws = {
      <span class="hljs-built_in">source</span>  = <span class="hljs-string">"hashicorp/aws"</span>
      version = <span class="hljs-string">"6.14.1"</span>
    }
  }
}

provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<p>Here, we've <strong>pinned the provider source and version</strong> (<code>hashicorp/aws</code> at <code>6.14.1</code>) and specified the region. This ensures consistency and avoids pulling in unverified updates.</p>
<h3 id="heading-step-2-add-an-ec2-instance-with-polp-iam-policy">Step 2: Add an EC2 Instance with PoLP IAM Policy</h3>
<p>Now let's create an EC2 instance and enforce the <strong>Principle of Least Privilege</strong> by attaching an IAM policy that allows it to <em>only read</em> the Terraform state file from S3.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># IAM Policy Document for S3 State Access</span>
data <span class="hljs-string">"aws_iam_policy_document"</span> <span class="hljs-string">"s3_state_access_doc"</span> {
  statement {
    sid    = <span class="hljs-string">"AllowListAndLocation"</span>
    effect = <span class="hljs-string">"Allow"</span>
    actions = [
      <span class="hljs-string">"s3:ListBucket"</span>,
      <span class="hljs-string">"s3:GetBucketLocation"</span>
    ]
    resources = [<span class="hljs-string">"arn:aws:s3:::my-pravesh-terraform-state-bucket-2025"</span>]
  }

  statement {
    sid    = <span class="hljs-string">"AllowReadStateFile"</span>
    effect = <span class="hljs-string">"Allow"</span>
    actions = [
      <span class="hljs-string">"s3:GetObject"</span>
    ]
    resources = [<span class="hljs-string">"arn:aws:s3:::my-pravesh-terraform-state-bucket-2025/terraform/terraform.tfstate"</span>]
  }
}

<span class="hljs-comment"># Create the IAM Policy</span>
resource <span class="hljs-string">"aws_iam_policy"</span> <span class="hljs-string">"s3_state_access_policy"</span> {
  name        = <span class="hljs-string">"EC2-S3-State-Read-Policy"</span>
  description = <span class="hljs-string">"Policy for EC2 to read the Terraform state file."</span>
  policy      = data.aws_iam_policy_document.s3_state_access_doc.json
}

<span class="hljs-comment"># Create the IAM Role</span>
resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ec2_s3_role"</span> {
  name               = <span class="hljs-string">"EC2-S3-State-Reader-Role"</span>
  assume_role_policy = jsonencode({
    Version = <span class="hljs-string">"2012-10-17"</span>,
    Statement = [
      {
        Action = <span class="hljs-string">"sts:AssumeRole"</span>,
        Effect = <span class="hljs-string">"Allow"</span>,
        Principal = {
          Service = <span class="hljs-string">"ec2.amazonaws.com"</span>
        },
        Sid = <span class="hljs-string">""</span>
      },
    ],
  })
}

<span class="hljs-comment"># Attach Policy to Role</span>
resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"s3_state_attach"</span> {
  role       = aws_iam_role.ec2_s3_role.name
  policy_arn = aws_iam_policy.s3_state_access_policy.arn
}

<span class="hljs-comment"># Create the Instance Profile</span>
resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"ec2_s3_profile"</span> {
  name = <span class="hljs-string">"EC2-S3-State-Reader-Profile"</span>
  role = aws_iam_role.ec2_s3_role.name
}

<span class="hljs-comment"># Get Latest Ubuntu AMI</span>
data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"ubuntu"</span> {
  most_recent = <span class="hljs-literal">true</span>
  owners      = [<span class="hljs-string">"099720109477"</span>] <span class="hljs-comment"># Canonical's account ID -- Ubuntu AMIs are published by Canonical, not "amazon"</span>

  filter {
    name   = <span class="hljs-string">"name"</span>
    values = [<span class="hljs-string">"ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"</span>]
  }

  filter {
    name   = <span class="hljs-string">"virtualization-type"</span>
    values = [<span class="hljs-string">"hvm"</span>]
  }
}

<span class="hljs-comment"># Create the EC2 Instance</span>
resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
  ami                  = data.aws_ami.ubuntu.id
  instance_type        = <span class="hljs-string">"t2.micro"</span>
  key_name             = <span class="hljs-string">"default-ec2"</span> <span class="hljs-comment"># &lt;&lt;-- Update this with your SSH key</span>
  iam_instance_profile = aws_iam_instance_profile.ec2_s3_profile.name

  tags = {
    Name = <span class="hljs-string">"testing-instance"</span>
  }
}
</code></pre>
<p>🔑 <strong>Notes:</strong></p>
<ul>
<li><p>Make sure you have an SSH key pair (<code>default-ec2</code>) in <strong>us-east-1</strong>. If not, create one in the AWS Console.</p>
</li>
<li><p>Ensure your <strong>default security group</strong> in the default VPC allows inbound traffic on <strong>Port 22 (SSH)</strong>.</p>
</li>
</ul>
<h3 id="heading-step-3-configure-backend-backendtf">Step 3: Configure Backend (<code>backend.tf</code>)</h3>
<pre><code class="lang-hcl">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket       = <span class="hljs-string">"my-pravesh-terraform-state-bucket-2025"</span>
    key          = <span class="hljs-string">"terraform/terraform.tfstate"</span>
    region       = <span class="hljs-string">"us-east-1"</span>
    use_lockfile = <span class="hljs-literal">true</span>
    encrypt      = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>This stores the <strong>state file remotely in S3</strong> with <strong>encryption + locking</strong> enabled.</p>
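<p>To make the locking idea concrete, here is a toy sketch in Python (illustrative only, not Terraform's actual implementation; the S3 backend's <code>use_lockfile</code> feature works by conditionally writing a <code>.tflock</code> object alongside the state key, so only one writer can hold it at a time):</p>
<pre><code class="lang-python"># Toy model of lockfile-based state locking: whoever creates the lock
# object first wins; everyone else must wait or fail.
locks = set()

def acquire(key: str) -> bool:
    if key in locks:      # another terraform run holds the lock
        return False
    locks.add(key)        # "conditional create" succeeds
    return True

def release(key: str) -> None:
    locks.discard(key)

print(acquire("terraform/terraform.tfstate.tflock"))  # True  - first apply proceeds
print(acquire("terraform/terraform.tfstate.tflock"))  # False - concurrent apply is blocked
release("terraform/terraform.tfstate.tflock")
</code></pre>
<p>This is why two teammates running <code>terraform apply</code> at the same time can't silently corrupt the shared state.</p>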
<h3 id="heading-step-4-create-the-s3-bucket">Step 4: Create the S3 Bucket</h3>
<pre><code class="lang-bash">aws s3 mb s3://my-pravesh-terraform-state-bucket-2025
</code></pre>
<h3 id="heading-step-5-initialise-and-deploy">Step 5: Initialise and Deploy</h3>
<pre><code class="lang-bash">terraform init --upgrade
terraform plan
terraform apply --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242299053/41ff603f-5f17-407f-9891-7337f63b8b2d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242306464/c9322b17-e5a0-4cdd-b79a-1e919f30df80.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242316515/10252600-acc2-440d-a39c-1bdada8dddef.png" alt class="image--center mx-auto" /></p>
<p>💡 Terraform should show <strong>5 resources to add</strong>. Within 2-3 minutes, your infrastructure will be ready!</p>
<h3 id="heading-step-6-test-polp-in-action">Step 6: Test PoLP in Action</h3>
<ol>
<li><p>Go to your AWS Console → <strong>EC2 Dashboard</strong> → select <code>testing-instance</code> → click <strong>Connect</strong>.</p>
</li>
<li><p>SSH into the instance.</p>
</li>
<li><p>Install AWS CLI:</p>
</li>
</ol>
<pre><code class="lang-bash">sudo apt update
sudo apt install awscli -y
</code></pre>
<ol start="4">
<li>Try listing objects in the S3 bucket (this works ✅):</li>
</ol>
<pre><code class="lang-bash">aws s3 ls s3://my-pravesh-terraform-state-bucket-2025/terraform
</code></pre>
<ol start="5">
<li>Try deleting the state file (this is blocked ❌):</li>
</ol>
<pre><code class="lang-bash">aws s3 rm s3://my-pravesh-terraform-state-bucket-2025/terraform/terraform.tfstate
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242332739/e27664cb-1dbf-4cbd-9ede-5a892c81fc96.png" alt class="image--center mx-auto" /></p>
<p>You'll get an <strong>AccessDenied</strong> error, proving our <strong>PoLP IAM policy works</strong>! 🔐</p>
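<p>The deny happens because IAM is deny-by-default: a request succeeds only if some statement explicitly allows that action on that resource. A toy sketch of that evaluation against the two statements above (illustrative only; the real IAM engine also handles explicit denies, wildcards, and conditions):</p>
<pre><code class="lang-python"># Simplified IAM evaluation: deny by default, allow only on an explicit match.
BUCKET = "arn:aws:s3:::my-pravesh-terraform-state-bucket-2025"
STATE = BUCKET + "/terraform/terraform.tfstate"

STATEMENTS = [
    {"actions": {"s3:ListBucket", "s3:GetBucketLocation"}, "resources": {BUCKET}},
    {"actions": {"s3:GetObject"}, "resources": {STATE}},
]

def is_allowed(action: str, resource: str) -> bool:
    """A request passes only if some Allow statement covers it."""
    return any(action in s["actions"] and resource in s["resources"]
               for s in STATEMENTS)

print(is_allowed("s3:GetObject", STATE))     # True  -> aws s3 ls / read works
print(is_allowed("s3:DeleteObject", STATE))  # False -> aws s3 rm gets AccessDenied
</code></pre>
<p>Since <code>s3:DeleteObject</code> never appears in any statement, the delete falls through to the implicit deny.</p>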
<h3 id="heading-step-7-scan-with-terrascan">Step 7: Scan with Terrascan</h3>
<p>Install Terrascan from <a target="_blank" href="https://github.com/tenable/terrascan?tab=readme-ov-file#install">GitHub</a>.</p>
<p>Then run it against your Terraform code:</p>
<pre><code class="lang-bash">terrascan scan -f main.tf
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242354006/ecc9e3fc-3176-4a0f-a55b-42ada85635f3.png" alt class="image--center mx-auto" /></p>
<p>Terrascan will flag any misconfigurations or potential security risks before deployment.</p>
<hr />
<h3 id="heading-step-8-safe-state-management">Step 8: Safe State Management</h3>
<p>Now let's rename our EC2 resource without downtime.</p>
<p>Instead of changing <code>web_server</code> → <code>app_server</code> in code and letting Terraform destroy and recreate the resource, use:</p>
<pre><code class="lang-bash">terraform state mv aws_instance.web_server aws_instance.app_server
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242364516/af8df344-d762-42bb-8da9-f695caf1f225.png" alt class="image--center mx-auto" /></p>
<p>You can verify the update by downloading the state file from the S3 console. No downtime, no drift, just clean state management.</p>
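<p>Under the hood, <code>terraform state mv</code> only rewrites the resource's <em>address</em> in the state file; the recorded attributes (and the real EC2 instance) are untouched. A simplified sketch of that rename (the real state file has many more fields than shown here):</p>
<pre><code class="lang-python"># Minimal model of a Terraform state: resource addresses mapped to attributes.
state = {
    "resources": {
        "aws_instance.web_server": {"id": "i-0abc123", "instance_type": "t2.micro"},
    }
}

def state_mv(state: dict, src: str, dst: str) -> None:
    """Re-key the resource under its new address; attributes are unchanged."""
    state["resources"][dst] = state["resources"].pop(src)

state_mv(state, "aws_instance.web_server", "aws_instance.app_server")
print(list(state["resources"]))  # ['aws_instance.app_server']
</code></pre>
<p>Because the instance ID stays the same, Terraform sees the renamed resource as already existing and plans no changes.</p>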
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242372986/e6e4aa3a-b8c3-403b-95dc-117eb542dc56.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-9-clean-up">Step 9: Clean up</h3>
<p>Once you're done with the project, run the following commands to delete the resources and avoid incurring charges:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve

aws s3 rm s3://my-pravesh-terraform-state-bucket-2025 --recursive
aws s3 rb s3://my-pravesh-terraform-state-bucket-2025
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242267279/e15b379a-1f8d-42e3-a43e-0ad0def52771.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759242258169/016d093e-fc0a-41b1-bf0e-836e0f63e350.png" alt class="image--center mx-auto" /></p>
<p>🎯 And thats it!</p>
<hr />
<h2 id="heading-conclusion">🏁 Conclusion</h2>
<p>In this project, we walked through <strong>five essential Terraform security best practices</strong> and then implemented them in a hands-on demonstration. From enabling state file encryption, configuring remote backends, enforcing version pinning, and using <code>terrascan</code> for security scanning, to applying the <strong>Principle of Least Privilege (PoLP)</strong> with IAM roles and policies, we covered a complete workflow for securing Terraform in production-grade environments.</p>
<p>Security is never a one-time task; it's an ongoing process. As your infrastructure grows, make sure you constantly revisit and update your security practices to minimize risks. With Terraform, it's not just about provisioning resources, it's about doing so <strong>securely, scalably, and responsibly</strong>.</p>
<p>If you found this guide useful, feel free to connect with me and check out more of my work 👇</p>
<p>🔗 <strong>Connect with me</strong></p>
<ul>
<li><p>🌐 Website/Blog: <a target="_blank" href="https://blog.praveshsudha.com/">https://blog.praveshsudha.com</a></p>
</li>
<li><p>💼 LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">https://www.linkedin.com/in/pravesh-sudha/</a></p>
</li>
<li><p>🐦 Twitter/X: <a target="_blank" href="https://x.com/praveshstwt">https://x.com/praveshstwt</a></p>
</li>
<li><p>🎥 YouTube: <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">https://www.youtube.com/@pravesh-sudha</a></p>
</li>
</ul>
<p>👋 Adios, see you in next one!</p>
]]></description><link>https://blog.praveshsudha.com/terraform-meets-devsecops-5-security-practices-you-cant-afford-to-ignore</link><guid isPermaLink="true">https://blog.praveshsudha.com/terraform-meets-devsecops-5-security-practices-you-cant-afford-to-ignore</guid><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 CI/CD for Terraform with GitHub Actions: Deploying a Node.js + Redis App on AWS]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of <strong>pipelines and automation</strong> 🚀. In this guide, we're going to uncover an exciting project where we deploy a <strong>Node.js + Redis web app</strong> on AWS using <strong>Terraform</strong> and <strong>GitHub Actions</strong> for seamless integration.</p>
<p>This walkthrough is designed with <strong>beginners in mind</strong>, so even if you're just starting with DevOps or cloud automation, you'll be able to follow along step by step.</p>
<p>The application itself is simple but practical: a <strong>Request Counter app</strong> that tracks the number of visits and stores the data in Redis. But the real magic isn't the app, it's the way we'll set up <strong>Infrastructure as Code (IaC)</strong> with Terraform, integrate it with <strong>GitHub Actions CI/CD</strong>, and see how everything ties together.</p>
<p>So without further ado, let's roll up our sleeves and get hands-on with some practical knowledge 💡.</p>
<hr />
<h2 id="heading-pre-requisites">🛠 Pre-requisites</h2>
<p>Before diving into the project, let's make sure our host system is ready with all the essentials. Here's what you'll need:</p>
<ul>
<li><p><strong>AWS Account + IAM User with Admin Access</strong> (for testing only; in production, always follow the Principle of Least Privilege)<br />  Since we'll be spinning up infrastructure on AWS, you need an IAM user with proper permissions and the AWS CLI configured locally.<br />  👉 If you're new to this, don't worry, I've explained the <strong>entire setup process (IAM user, AWS CLI, and Terraform installation)</strong> in one of my earlier blogs:<br />  <a target="_blank" href="https://blog.praveshsudha.com/learn-how-to-deploy-a-three-tier-application-on-aws-eks-using-terraform-with-best-practices#heading-create-an-iam-user">Learn How to Deploy a Three-Tier Application on AWS EKS Using Terraform with Best Practices</a></p>
</li>
<li><p><strong>Docker &amp; Docker Compose</strong><br />  Our application runs in containers, so Docker and Docker Compose are a must. You can install them easily from their official documentation:<br />  <a target="_blank" href="https://docs.docker.com/get-started/get-docker/">Install Docker</a></p>
</li>
</ul>
<p>Once these are configured, we're all set to start building and automating this project!</p>
<hr />
<h2 id="heading-video-demonstration">💡 Video Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/D0w_1a3fYhM">https://youtu.be/D0w_1a3fYhM</a></div>
<p> </p>
<hr />
<h2 id="heading-getting-started-with-the-project">🚀 Getting Started with the Project</h2>
<p>The complete code for this project is available on my GitHub repository:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/nginx-node-redis.git">nginx-node-redis</a></p>
<p>To begin, fork this repository under your own GitHub username and then clone it locally using the commands below:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/&lt;your-username&gt;/nginx-node-redis.git
<span class="hljs-built_in">cd</span> nginx-node-redis/
</code></pre>
<p><strong>Why this repo?</strong><br />The project originally comes from Docker's official <code>dockersamples</code> collection. It's a simple <strong>Node.js + Redis request counter</strong> application with an NGINX load balancer in front. Every time you refresh the page, the counter increments and you also see the hostname (<code>web1</code> or <code>web2</code>) that served your request.</p>
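<p>The counter logic itself is tiny. Here's a Python sketch of the assumed behavior of <code>web/server.js</code>, with a plain dict standing in for Redis's <code>INCR</code> (names here are illustrative, not the actual Node.js code):</p>
<pre><code class="lang-python"># Toy model of the request counter: every request bumps a shared counter
# and reports which backend served it.
fake_redis = {}

def handle_request(hostname: str) -> str:
    fake_redis["visits"] = fake_redis.get("visits", 0) + 1
    return f"Visits: {fake_redis['visits']} (served by {hostname})"

print(handle_request("web1"))  # Visits: 1 (served by web1)
print(handle_request("web2"))  # Visits: 2 (served by web2)
</code></pre>
<p>Because both web containers share the same Redis instance, the count keeps increasing no matter which backend handles the request.</p>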
<p>I've taken that base project and <strong>leveled it up</strong> to make it more attractive and production-friendly.</p>
<h3 id="heading-key-highlights-of-the-project">🔑 Key Highlights of the Project</h3>
<ul>
<li><p><code>web/server.js</code>  Basic Node.js app that connects to Redis on port <code>6379</code> and returns:</p>
<ul>
<li><p>The number of visits (increments with every refresh)</p>
</li>
<li><p>The hostname (<code>web1</code> or <code>web2</code>) serving the request</p>
</li>
<li><p>Runs on port <code>5000</code></p>
</li>
<li><p>Now enhanced with a <strong>beautiful HTML UI</strong> instead of plain text 🎨</p>
</li>
</ul>
</li>
<li><p><code>nginx/nginx.conf</code>  Custom configuration with:</p>
<ul>
<li><p>An upstream load balancer pointing to <code>web1:5000</code> and <code>web2:5000</code></p>
</li>
<li><p>Proxy pass rules to evenly distribute traffic</p>
</li>
<li><p>Dockerfile that replaces the default NGINX config with our custom one</p>
</li>
</ul>
</li>
<li><p><code>docker-compose.yml</code>  Glues everything together:</p>
<ul>
<li><p><strong>Redis container</strong> (database)</p>
</li>
<li><p><strong>Two Node.js containers</strong> (<code>web1</code> and <code>web2</code>)</p>
</li>
<li><p><strong>NGINX container</strong> (acting as a reverse proxy and load balancer)</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-my-enhancements"> My Enhancements</h3>
<ul>
<li><p> Improved UI in <code>web/server.js</code></p>
</li>
<li><p>📦 Added <strong>Terraform configuration</strong> (<code>terra-config/</code>) to deploy the app on AWS</p>
</li>
<li><p> Added <strong>GitHub Actions workflow</strong> (<code>.github/workflows/main.yml</code>) for CI/CD automation</p>
</li>
</ul>
<p>With these changes, the project is no longer just a demo; it's a <strong>mini production-grade system</strong> that you can deploy with one push.</p>
<hr />
<h2 id="heading-testing-locally-with-docker-compose">🐳 Testing Locally with Docker Compose</h2>
<p>Before deploying this app on AWS with Terraform, let's test it locally to make sure everything works as expected.</p>
<p>Navigate to the <strong>project root directory</strong> and run the following command:</p>
<pre><code class="lang-bash">docker-compose up --build
</code></pre>
<p> This will:</p>
<ul>
<li><p>Build Docker images for <strong>Redis</strong>, <strong>Node.js web servers</strong>, and <strong>NGINX</strong></p>
</li>
<li><p>Start up all the containers in the correct order</p>
</li>
<li><p>Expose the application through NGINX on <strong>port 80</strong></p>
</li>
</ul>
<p>If everything is set up properly, you'll see logs confirming the containers are running. Most importantly, look for this message in your terminal:</p>
<pre><code>The web server is listening on Port 5000.
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757419980581/4a107a35-7f82-4aa4-86fd-fdcdbdccadd6.png" alt class="image--center mx-auto" /></p>
<p>Now, open your browser and go to 👉 <a target="_blank" href="http://localhost/"><strong>http://localhost:80</strong></a></p>
<p>🎉 You should see the application live!</p>
<ul>
<li><p>Every time you <strong>refresh the page</strong>, the counter will increment by <code>+1</code></p>
</li>
<li><p>The hostname will change between <strong>web1</strong> and <strong>web2</strong>, showing that load balancing is working perfectly</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757420174783/2d611565-8a22-40a2-ac85-721fe8c0a78e.png" alt class="image--center mx-auto" /></p>
<p>This step confirms that the <strong>Docker + NGINX + Redis setup is solid</strong> before we move on to cloud deployment.</p>
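<p>The alternation between <strong>web1</strong> and <strong>web2</strong> comes from nginx's default round-robin upstream selection, which can be sketched like this (illustrative only; nginx implements this natively in C, with weights and failure handling on top):</p>
<pre><code class="lang-python"># Round-robin over the upstream servers declared in nginx.conf.
from itertools import cycle

upstream = cycle(["web1:5000", "web2:5000"])

# Four consecutive requests alternate between the two backends.
served = [next(upstream) for _ in range(4)]
print(served)  # ['web1:5000', 'web2:5000', 'web1:5000', 'web2:5000']
</code></pre>
<p>Each refresh in the browser is one more <code>next(upstream)</code>, which is exactly why the hostname flips back and forth.</p>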
<hr />
<h2 id="heading-integrating-github-actions-and-terraform"> Integrating GitHub Actions and Terraform</h2>
<p>Now that our app works locally, let's automate deployment with <strong>GitHub Actions + Terraform</strong>. This will allow us to:</p>
<ul>
<li><p>Provision infrastructure on AWS automatically</p>
</li>
<li><p>Deploy the app onto an EC2 instance</p>
</li>
<li><p>Run health checks</p>
</li>
<li><p>Tear down resources after testing (to save 💰 on AWS bills)</p>
</li>
</ul>
<h3 id="heading-step-1-add-aws-secrets">🔑 Step 1: Add AWS Secrets</h3>
<ol>
<li><p>Go to your <strong>GitHub Dashboard</strong> → open the <code>nginx-node-redis</code> repo</p>
</li>
<li><p>Navigate to <strong>Settings &gt; Secrets and Variables &gt; Actions</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757426882779/35364700-903a-40f9-9436-5dc81a2e1f44.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add the following <strong>Repository Secrets</strong> (from the IAM user you created earlier):</p>
<ul>
<li><p><code>AWS_ACCESS_KEY_ID</code></p>
</li>
<li><p><code>AWS_SECRET_ACCESS_KEY</code></p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757426889800/b1aa4e6f-5841-4a40-b9f7-c8cb5ed475f3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-terraform-configuration">🏗 Step 2: Terraform Configuration</h3>
<p>Inside the <code>terra-config/main.tf</code> file:</p>
<ul>
<li><p>Uses the <strong>default VPC</strong></p>
</li>
<li><p>Fetches the latest <strong>Ubuntu AMI</strong></p>
</li>
<li><p>Creates a <strong>Security Group</strong> with <strong>port 22 (SSH)</strong> and <strong>port 80 (HTTP)</strong> open</p>
</li>
<li><p>Provisions a <strong>t2.micro EC2 instance</strong> using that SG and AMI</p>
</li>
<li><p>Includes a <strong>user-data script</strong> that installs Docker, Docker Compose, and runs our app</p>
</li>
</ul>
<p>📌 In short: Terraform handles all infra creation and boots up an EC2 instance that runs our Request Counter app.</p>
<p><strong>Important Change</strong>:<br />In the <code>user_data</code> section of <code>main.tf</code>, update the GitHub repo URL to point to <strong>your forked repo</strong>:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/&lt;your-github-username&gt;/nginx-node-redis.git
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427016537/5404c312-9768-41ca-bc23-2beca964556f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-github-actions-workflow"> Step 3: GitHub Actions Workflow</h3>
<p>The <code>.github/workflows/main.yml</code> pipeline is triggered on <strong>push</strong> or <strong>pull requests</strong>. Here's what happens step by step:</p>
<ol>
<li><p> <strong>Checkout</strong> the repository</p>
</li>
<li><p> <strong>Setup Terraform</strong></p>
</li>
<li><p>📂 Run <code>terraform init</code>, <code>terraform validate</code>, <code>terraform plan</code></p>
</li>
<li><p>🚀 Apply infrastructure with <code>terraform apply</code></p>
</li>
<li><p><strong>Wait 90 seconds</strong>, allowing the user-data script to finish setting up Docker + the app</p>
</li>
<li><p>🌍 <strong>Fetch the Public IP</strong> of the EC2 instance</p>
</li>
<li><p>🔎 <strong>Health check</strong>, ensuring the app is live</p>
</li>
<li><p> Keep the application running for <strong>5 minutes</strong></p>
</li>
<li><p>🧹 <strong>Auto teardown</strong> with <code>terraform destroy</code>, ensuring no unused AWS resources are left behind</p>
</li>
</ol>
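<p>The health-check stage boils down to a retry loop: probe the app a few times before declaring the deployment healthy. A hedged Python sketch, with <code>probe</code> standing in for the workflow's actual <code>curl</code> against the EC2 public IP (function names and retry counts here are illustrative, not taken from the workflow file):</p>
<pre><code class="lang-python"># Poll until the app responds, or give up after a fixed number of attempts.
import time

def wait_until_healthy(probe, retries: int = 5, delay: float = 0.0) -> bool:
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)  # a real workflow would sleep ~10s between attempts
    return False

# Simulate an app that only becomes healthy on the third probe.
responses = iter([False, False, True])
print(wait_until_healthy(lambda: next(responses)))  # True
</code></pre>
<p>Retrying matters here because the EC2 instance needs time after boot to pull images and start containers; a single immediate check would fail spuriously.</p>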
<h3 id="heading-step-4-trigger-the-workflow"> Step 4: Trigger the Workflow</h3>
<p>Once you've updated your repo URL in <code>main.tf</code>, commit and push the changes:</p>
<pre><code class="lang-bash">git status
git add terra-config/main.tf
git commit -m <span class="hljs-string">"Updated GitHub repo URL"</span>
git push origin main
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427125777/8c0af645-0b6b-453a-a350-0245c8aa7aba.png" alt class="image--center mx-auto" /></p>
<p>This push will trigger the <strong>GitHub Actions workflow</strong>, automatically deploying your app to AWS.</p>
<hr />
<h2 id="heading-testing-the-deployment-on-aws">🚀 Testing the Deployment on AWS</h2>
<p>Once everything is set up, it's time to watch the magic happen.</p>
<ol>
<li><p>Go to your <strong>GitHub Repository Dashboard</strong></p>
</li>
<li><p>Click on <strong>Actions</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427735496/8de75141-eec1-4fb6-9436-8a0c99f4d04c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Find your recent commit, for example:</p>
<pre><code>"Updated GitHub repo URL"
</code></pre>
</li>
<li><p>Open the workflow run, where you'll see the <strong>terraform job</strong> executing step by step.</p>
</li>
</ol>
<p>🟢 Wait until the workflow reaches the <strong>Keep App Running Stage</strong>. At this point, Terraform has already created your EC2 instance, installed Docker &amp; Docker Compose, and started the application.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427776802/aaebc7f1-6b7e-4b99-a756-48afac906fc9.png" alt class="image--center mx-auto" /></p>
<p>👉 Click on the <strong>Public URL</strong> shown in the logs, and <strong>voilà!</strong> 🎉<br />Your <strong>Request Counter App</strong> is now live on an AWS EC2 instance 🚀.</p>
<ul>
<li><p>You can interact with the app for <strong>5 minutes</strong></p>
</li>
<li><p>Every refresh increments the counter</p>
</li>
<li><p>Requests are distributed between <strong>web1</strong> and <strong>web2</strong> via the <strong>Nginx load balancer</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427842339/6c018680-9c52-440c-831c-2ea81c510b66.png" alt class="image--center mx-auto" /></p>
<p> After 5 minutes, GitHub Actions will automatically run <code>terraform destroy</code> to clean up resources and avoid unnecessary AWS charges 💸.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757427822722/78e22c0b-f7f2-4e70-b1b1-cda366d72c56.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-real-life-scenario-updating-your-app-on-the-fly">🔄 Real-Life Scenario: Updating Your App on the Fly</h2>
<p>Now that our app is live, let's simulate a <strong>real production change</strong>.</p>
<p>Imagine your manager says:</p>
<blockquote>
<p>"Hey, can you update the heading from <em>Welcome to Request Counter</em> to <em>Welcome to My Amazing Request Counter</em>?"</p>
</blockquote>
<p>Heres how simple it becomes with <strong>GitHub Actions + Terraform</strong>:</p>
<ol>
<li><p>Open the <code>web/server.js</code> file in your project</p>
</li>
<li><p>Find the <code>&lt;h1&gt;</code> HTML tag and update it:</p>
<pre><code class="lang-html"> <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Welcome to My Amazing Request Counter<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757429807203/c6c7853f-0638-4ecc-a15d-fab09dd05734.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Push the changes to your repository:</p>
<pre><code class="lang-bash"> git status
 git add web/server.js
 git commit -m <span class="hljs-string">"Heading Changed"</span>
 git push origin main
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757429820618/ddad1cf6-5949-4e80-a5a1-80a54d3f7947.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>🎉 That's it! GitHub Actions will pick up the changes, trigger the workflow, and within <strong>5 minutes</strong> your updated heading will be <strong>live on the EC2 instance</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757429837828/2acd09e7-6317-4b5e-8c5c-3857a0201d57.png" alt class="image--center mx-auto" /></p>
<p>This is the true <strong>power of CI/CD pipelines</strong>: no manual SSH into servers, no redeploying by hand. Just push your code and let <strong>Terraform + GitHub Actions</strong> handle the rest 🚀.</p>
<hr />
<h2 id="heading-conclusion">🏁 Conclusion</h2>
<p>In this blog, we took a simple <strong>Node.js + Redis counter application</strong> and supercharged it with <strong>Nginx, Docker, Terraform, and GitHub Actions</strong>.</p>
<p>What started as a local demo quickly transformed into a <strong>cloud-ready, automated CI/CD pipeline</strong>:</p>
<ul>
<li><p>🚀 <strong>Docker + Docker Compose</strong> handled local testing and containerization</p>
</li>
<li><p> <strong>Terraform</strong> provisioned AWS infrastructure seamlessly</p>
</li>
<li><p>🔄 <strong>GitHub Actions</strong> automated deployments and teardown, giving us a clean, cost-effective workflow</p>
</li>
<li><p>🎯 And most importantly, we saw how easy it is to make real-time changes that go live with just a <code>git push</code>.</p>
</li>
</ul>
<p>This project proves how <strong>infrastructure as code + automation</strong> can save developers hours of repetitive work and make production-ready workflows more reliable.</p>
<p>Thanks for following along  I hope this inspires you to build your own <strong>Terraform + GitHub Actions pipelines</strong>!</p>
<p>📬 <strong>Lets Connect</strong>:</p>
<ul>
<li><p>🌐 Website: <a target="_blank" href="https://praveshsudha.com/">praveshsudha.com</a></p>
</li>
<li><p>💼 LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">Pravesh Sudha</a></p>
</li>
<li><p>🐦 Twitter/X: <a target="_blank" href="https://x.com/praveshstwt">@praveshstwt</a></p>
</li>
<li><p>📺 YouTube: <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">Pravesh Sudha</a></p>
</li>
</ul>
]]></description><link>https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws</link><guid isPermaLink="true">https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws</guid><category><![CDATA[Docker]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Devops]]></category><category><![CDATA[github-actions]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🌟 Automating Cover Letters with Portia AI: My AgentHack 2025 Journey]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome Devs 👋 to the world of <strong>AI and Automation</strong>.</p>
<p>In this blog, I'll be sharing my experience of building an <strong>AI Agent</strong> that writes personalized cover letters for my freelance gigs, powered by <strong>Portia AI</strong> and <strong>Gemini</strong>. As a full-time freelancer, writing cover letters for every single project often feels repetitive and time-consuming. That's exactly where this idea was born.</p>
<p>The journey of making this project was full of <strong>ebbs and flows</strong>: debugging errors, testing prompts, tweaking workflows. But in the end, it was worth it. The biggest takeaway? 🤔 If you use AI agents wisely, they can <strong>save you countless hours every week</strong> while still keeping your work personal and professional.</p>
<p>So without further ado, let's dive into how I built this project, what challenges I faced, and how Portia helped me automate one of the most tiring parts of freelancing: <strong>writing cover letters</strong>.</p>
<hr />
<h2 id="heading-youtube-demo">💡 Youtube Demo</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/mSh3d0BJOB4">https://youtu.be/mSh3d0BJOB4</a></div>
<p> </p>
<hr />
<h2 id="heading-the-problem">💡 The Problem</h2>
<p>As a <strong>full-time freelancer</strong>, one of the most time-consuming parts of my workflow is writing <strong>personalized cover letters</strong> for each new opportunity.</p>
<p>Don't get me wrong, being thoughtful and specific in a proposal is important. It shows effort and increases the chances of landing the gig. But after doing it over and over again, it quickly becomes:</p>
<ul>
<li><p><strong>Repetitive</strong> 🌀</p>
</li>
<li><p><strong>Exhausting</strong> 😮💨</p>
</li>
<li><p>And honestly, <strong>inefficient</strong> </p>
</li>
</ul>
<p>I found myself spending hours every week just writing cover letters, time that could have been used to actually work on projects.</p>
<p>That's when I realized I needed a smarter solution. A tool that could:</p>
<ul>
<li><p><strong>Read documents</strong> containing the job description along with my portfolio.</p>
</li>
<li><p><strong>Generate a personalized cover letter</strong> based on that information.</p>
</li>
<li><p><strong>Send the cover letter via email</strong> directly to the job poster. And if no email is provided, then simply mail it to myself to keep it safe.</p>
</li>
</ul>
<p>This became the spark for building my own <strong>AI-powered freelancing assistant</strong> 🚀.</p>
<hr />
<h2 id="heading-the-journey">🚀 The Journey</h2>
<p>After deciding to build my <strong>AI-powered Cover Letter Generator</strong>, I rolled up my sleeves and dived into the <strong>Portia AI docs</strong>.</p>
<p>At first, I'll admit, it felt overwhelming. There's a lot to take in when you're just starting with AI agents. But after tinkering around for a while, I started to get the hang of it.</p>
<p>My first step was creating a <strong>Google API Key</strong> in Google AI Studio for the <strong>Gemini-2.0-flash</strong> model. I thought I'd start simple, so I ran a basic <code>2 + 2</code> test. To my surprise, it failed. 🙃</p>
<p>The error? Something about missing API keys, even though I had provided them correctly. That's when I reached out to the <strong>Portia Team on Discord</strong>. And here's the cool part: none other than <strong>Emma Burrows, Co-founder &amp; CTO of Portia</strong>, personally helped me troubleshoot it. Turns out, it wasn't the API at all; it was a <strong>dependency issue</strong>. Once I fixed that, everything clicked into place. That was my first small win.</p>
<p>Next, I wanted to see how Portia works with <strong>Google Apps like Gmail and Calendar</strong>. So I went through their <a target="_blank" href="https://github.com/portiaAI/portia-agent-examples/tree/main/get-started-google-tools">official examples on GitHub</a>. That gave me clarity on how to structure tool usage and start building my own agent.</p>
<p>From there, I built the first version of my program (<code>app.py</code>). It had all the pieces I needed:</p>
<ul>
<li><p>Reading the job description doc + my portfolio doc.</p>
</li>
<li><p>Extracting recipient emails.</p>
</li>
<li><p>Generating a personalized cover letter.</p>
</li>
<li><p>Sending it via Gmail automatically.</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv
<span class="hljs-keyword">import</span> os

<span class="hljs-keyword">from</span> portia <span class="hljs-keyword">import</span> (
    Config,
    DefaultToolRegistry,
    Portia,
    LLMProvider,
    ActionClarification,
    InputClarification,
    MultipleChoiceClarification,
    PlanRunState,
)

load_dotenv(override=<span class="hljs-literal">True</span>)

job_doc_title = os.getenv(<span class="hljs-string">"JOB_DOC_TITLE"</span>, <span class="hljs-string">"Job"</span>).strip()
portfolio_doc_title = os.getenv(<span class="hljs-string">"PORTFOLIO_DOC_TITLE"</span>, <span class="hljs-string">"Portfolio"</span>).strip()
output_doc_title = os.getenv(<span class="hljs-string">"OUTPUT_DOC_TITLE"</span>, <span class="hljs-string">"Cover-letter"</span>).strip()

self_email = os.getenv(<span class="hljs-string">"SELF_EMAIL"</span>, <span class="hljs-string">""</span>).strip()
<span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> self_email:
    self_email = input(<span class="hljs-string">"Enter your fallback email (SELF_EMAIL):\n"</span>).strip()

socials_block = os.getenv(<span class="hljs-string">"SOCIALS"</span>, <span class="hljs-string">""</span>).strip()
<span class="hljs-keyword">if</span> <span class="hljs-keyword">not</span> socials_block:
    parts = []
    <span class="hljs-keyword">for</span> key, label <span class="hljs-keyword">in</span> [
        (<span class="hljs-string">"LINKEDIN_URL"</span>, <span class="hljs-string">"LinkedIn"</span>),
        (<span class="hljs-string">"TWITTER_URL"</span>, <span class="hljs-string">"X/Twitter"</span>),
        (<span class="hljs-string">"YOUTUBE_URL"</span>, <span class="hljs-string">"YouTube"</span>),
        (<span class="hljs-string">"WEBSITE_URL"</span>, <span class="hljs-string">"Website"</span>),
    ]:
        v = os.getenv(key, <span class="hljs-string">""</span>).strip()
        <span class="hljs-keyword">if</span> v:
            parts.append(<span class="hljs-string">f"<span class="hljs-subst">{label}</span>: <span class="hljs-subst">{v}</span>"</span>)
    socials_block = <span class="hljs-string">"\n"</span>.join(parts) <span class="hljs-keyword">if</span> parts <span class="hljs-keyword">else</span> <span class="hljs-string">"(Add your socials here)"</span>

constraints: list[str] = []


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">task_text</span>() -&gt; str:</span>
    <span class="hljs-string">"""
    We describe *what* to do. Portia will pick the right tools:
    - Google Drive: search files
    - Google Docs: read contents
    - Gmail: draft/send email
    - (Attempt) create a new Google Doc for the cover letter
    """</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">f"""
You are an expert career coach and outreach copywriter. Use available Google tools to complete the flow below.

Inputs:
- Job doc title: "<span class="hljs-subst">{job_doc_title}</span>"
- Portfolio doc title: "<span class="hljs-subst">{portfolio_doc_title}</span>"
- Output Google Doc title: "<span class="hljs-subst">{output_doc_title}</span>"
- Fallback recipient email (if none in Job doc): "<span class="hljs-subst">{self_email}</span>"
- Socials block (append after sign-off):
<span class="hljs-subst">{socials_block}</span>

Here's what I need you to do:

1. Locate the Google Doc that matches the job title and the one that matches my portfolio. If either is missing or there are multiple, stop and ask me.

2. Read both docs carefully. From the job doc, identify the role, company, and (if available) the recipient email.
    - If no recipient email is found, use the fallback.

3. Write a tailored cover letter (200-350 words).
    - Inputs: ONLY the job doc content, portfolio content
    - Begin with a strong opening.
    - Highlight 2-3 achievements that connect my skills with the job requirements.
    - Keep it professional, concise, and specific.
    - After the closing and my name, add the socials block.

4. Send the cover letter via Gmail.
    - Inputs: the cover letter text + the recipient email.
    - Subject: Cover letter - Pravesh Sudha - AWS Community Builder
    - Body: Use the same cover letter text, then include the Google Doc link at the end if available. 

Rules:  
- When using tools (Drive, Docs, Gmail), respond only with the correct tool call JSON (no explanations, no markdown).  
- If you need my input (e.g., choosing between multiple docs, missing info), pause and ask.  
- If a Google Doc cannot be created, fallback to emailing me a draft with the letter.  
"""</span>


print(<span class="hljs-string">"\n📋 Generating a plan with Portia ...\n"</span>)

config = Config.from_default(
    llm_provider=LLMProvider.GOOGLE,
    default_model=<span class="hljs-string">"google/gemini-2.0-flash"</span>,  <span class="hljs-comment"># fast + capable for writing</span>
    enforce_schema=<span class="hljs-literal">True</span>,
)

tool_registry = DefaultToolRegistry(config)
portia = Portia(config=config, tools=tool_registry)

plan = portia.plan(task_text())
print(<span class="hljs-string">"Here are the steps in the generated plan:\n"</span>)
print(plan.pretty_print())

<span class="hljs-keyword">while</span> <span class="hljs-literal">True</span>:
    yn = input(<span class="hljs-string">"\nAre you happy with the plan? (y/n):\n"</span>).strip().lower()
    <span class="hljs-keyword">if</span> yn == <span class="hljs-string">"y"</span>:
        <span class="hljs-keyword">break</span>
    extra = input(<span class="hljs-string">"Any additional guidance for the planner?:\n"</span>).strip()
    <span class="hljs-keyword">if</span> extra:
        constraints.append(extra)
    <span class="hljs-comment"># Fold the collected guidance into the task so re-planning actually uses it</span>
    guidance = <span class="hljs-string">"\n\nAdditional guidance:\n"</span> + <span class="hljs-string">"\n"</span>.join(constraints) <span class="hljs-keyword">if</span> constraints <span class="hljs-keyword">else</span> <span class="hljs-string">""</span>
    plan = portia.plan(task_text() + guidance)
    print(<span class="hljs-string">"\nUpdated plan:\n"</span>)
    print(plan.pretty_print())


print(<span class="hljs-string">"\n🚀 Executing the plan ...\n"</span>)
plan_run = portia.run_plan(plan)

<span class="hljs-keyword">while</span> plan_run.state == PlanRunState.NEED_CLARIFICATION:
    <span class="hljs-keyword">for</span> clarification <span class="hljs-keyword">in</span> plan_run.get_outstanding_clarifications():
        <span class="hljs-keyword">if</span> isinstance(clarification, (InputClarification, MultipleChoiceClarification)):
            prompt = clarification.user_guidance <span class="hljs-keyword">or</span> <span class="hljs-string">"Please provide a value"</span>
            <span class="hljs-comment"># Some multiple-choice clarifications expose options</span>
            options = getattr(clarification, <span class="hljs-string">"options"</span>, <span class="hljs-literal">None</span>)
            <span class="hljs-keyword">if</span> options:
                prompt += <span class="hljs-string">"\n"</span> + <span class="hljs-string">"\n"</span>.join(str(o) <span class="hljs-keyword">for</span> o <span class="hljs-keyword">in</span> options)
            user_input = input(prompt + <span class="hljs-string">"\n"</span>)
            plan_run = portia.resolve_clarification(clarification, user_input, plan_run)

        <span class="hljs-keyword">elif</span> isinstance(clarification, ActionClarification):
            <span class="hljs-comment"># Typically OAuth. You'll get a link to click.</span>
            print(<span class="hljs-string">"\n🔐 Authorization required. Open this link to continue:"</span>)
            print(clarification.action_url)
            print(<span class="hljs-string">"Waiting for authorization to complete...\n"</span>)
            plan_run = portia.wait_for_ready(plan_run)


    plan_run = portia.resume(plan_run)


print(<span class="hljs-string">"\n Plan run complete. Raw output below:\n"</span>)
print(plan_run.model_dump_json(indent=<span class="hljs-number">2</span>))
</code></pre>
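<p>To run this yourself, the setup is roughly as follows. Note that the package name here is my assumption, so double-check Portia's install docs for the exact spelling and extras:</p>
<pre><code class="lang-bash">python3 -m venv .venv
source .venv/bin/activate
pip install python-dotenv portia-sdk-python   <span class="hljs-comment"># package name assumed; see Portia docs</span>
python app.py                                 <span class="hljs-comment"># expects the .env described next</span>
</code></pre>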
<p>I saved all my sensitive info in a <code>.env</code> file (API keys, fallback emails, socials), making the setup clean and flexible.</p>
<pre><code class="lang-bash">PORTIA_API_KEY=<span class="hljs-string">"your-portia-key"</span>
GOOGLE_API_KEY=<span class="hljs-string">"your-gemini-key"</span>

<span class="hljs-comment"># Optional (defaults shown in code)</span>
JOB_DOC_TITLE=<span class="hljs-string">"Job"</span>
PORTFOLIO_DOC_TITLE=<span class="hljs-string">"Portfolio"</span>
OUTPUT_DOC_TITLE=<span class="hljs-string">"Cover-letter"</span>

<span class="hljs-comment"># Email handling</span>
SELF_EMAIL=<span class="hljs-string">"example@gmail.com"</span>   <span class="hljs-comment"># used if Job doc has no email</span>

<span class="hljs-comment"># Socials: either provide a single SOCIALS block or individual links</span>
LINKEDIN_URL=<span class="hljs-string">"Your-Linkedin"</span>
TWITTER_URL=<span class="hljs-string">"Your-Twitter"</span>
YOUTUBE_URL=<span class="hljs-string">"Your-Youtube"</span>
WEBSITE_URL=<span class="hljs-string">"Your-Website"</span>
</code></pre>
<p>Heres the repo if you want to check out the code: 👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/agent-hack-2025">GitHub  agent-hack-2025</a></p>
<p>Of course, the road wasn't smooth. Along the way, I hit <strong>service quota limits</strong> and tried different LLMs like OpenAI, Anthropic, and Mistral; none of them worked seamlessly. Eventually, I generated a <strong>new Google API key</strong> on a different account, which fixed the quota issue.</p>
<p>But then came the real grind: getting the <strong>task prompt right</strong>. Portia is powerful, but it requires clear instructions. I had to <strong>refactor the prompt over 24 times</strong> before the execution flow was smooth. Finally, the magic happened:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107408877/d1a52be2-74bd-4bdf-af2f-ddc60209cc44.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Portia read the job + portfolio docs,</p>
</li>
<li><p>Generated a crisp cover letter,</p>
</li>
<li><p>And emailed it directly to the client.</p>
</li>
</ul>
<p>Seeing that email land in the inbox felt <strong>dope.</strong> 😎</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107899166/c6308311-dc4d-4c1c-b089-24e8750bc545.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-how-it-works">How It Works</h2>
<p>Now that the agent was finally up and running, here's what the flow looks like in action:</p>
<ol>
<li><p><strong>Input Stage: Job &amp; Portfolio Docs</strong> 📂<br /> The agent starts by reading two files:</p>
<ul>
<li><p>A <strong>Job Description</strong> doc (the opportunity I want to apply for).</p>
</li>
<li><p>My <strong>Portfolio</strong> doc (skills, past projects, and relevant achievements).</p>
</li>
</ul>
</li>
</ol>
<p>    This ensures every cover letter is <em>tailored</em>, not generic.</p>
<ol start="2">
<li><p><strong>Processing Stage: Gemini + Portia Magic</strong></p>
<ul>
<li><p>Portia passes the docs into the <strong>Gemini model</strong>.</p>
</li>
<li><p>Gemini analyzes the job requirements alongside my portfolio.</p>
</li>
<li><p>The output? A <strong>personalized cover letter</strong> that feels thoughtful, not copy-pasted.</p>
</li>
</ul>
</li>
<li><p><strong>Email Handling</strong> 📧</p>
<ul>
<li><p>If the <strong>client's email</strong> is available in the doc, the cover letter gets sent directly to them.</p>
</li>
<li><p>If no email is found, the agent sends the cover letter to <strong>my fallback email</strong> (saved in <code>.env</code>), so I can reuse it later.</p>
</li>
</ul>
</li>
<li><p><strong>Smart Fail-safes</strong> 🛡</p>
<ul>
<li><p>All sensitive data (API keys, fallback emails, socials) are stored securely in <code>.env</code>.</p>
</li>
<li><p>If an email fails, the letter isn't lost; I still get it in my inbox.</p>
</li>
</ul>
</li>
<li><p><strong>Automation in Action</strong> 🚀</p>
<ul>
<li><p>No more copy-pasting job descriptions.</p>
</li>
<li><p>No more staring at a blank page wondering how to start a letter.</p>
</li>
<li><p>Everything is automated: <strong>read → generate → email → done.</strong></p>
</li>
</ul>
</li>
</ol>
<p>Heres a simple visual flow for clarity:</p>
<p><code>Job Doc + Portfolio → Gemini via Portia → Cover Letter → Gmail → Client/Me</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107709082/e9f14909-e2ec-402e-9a5a-01456dfbbbcb.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107932796/e344764f-e431-49b8-9329-ba6484ea21cb.png" alt class="image--center mx-auto" /></p>
<p>What used to take me <strong>30-40 minutes per job</strong> now takes less than <strong>30 seconds.</strong> That's the kind of efficiency freelancers <em>dream of</em>.</p>
<h3 id="heading-portfolio-doc">📄 Portfolio Doc</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107840331/c56b6113-b666-4eb0-be59-27177af3f02a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-job-doc">📄 Job Doc</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756107858384/d7f68c50-1d84-45d6-b466-105839ca1a21.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-the-impact">💡 The Impact</h2>
<p>This small project ended up creating a <strong>huge impact</strong> on my workflow as a freelancer.</p>
<ul>
<li><p>What once took me <strong>hours every week</strong> now happens in <strong>seconds</strong>.</p>
</li>
<li><p>Instead of wasting energy on repetitive writing, I can focus more on <strong>client work and skill-building</strong>.</p>
</li>
<li><p>The best part? Each cover letter still feels <strong>personalized and human</strong>, not like a bland AI template.</p>
</li>
</ul>
<p>It's not just about saving time; it's about <strong>working smarter</strong>. AI agents like this one give freelancers the leverage to scale their efforts without burning out.</p>
<hr />
<h2 id="heading-conclusion">🚀 Conclusion</h2>
<p>Building this project with <strong>Portia AI</strong> and <strong>Gemini</strong> was more than just a hackathon entry; it was a lesson in how AI agents can <strong>redefine productivity for freelancers</strong>.</p>
<p>As a full-time freelancer, I know the hustle of juggling multiple gigs, proposals, and deadlines. With Portia, I've turned a time-consuming, repetitive task into something <strong>swift, efficient, and stress-free</strong>.</p>
<p>This is just the beginning: AI agents are not here to replace us, but to <strong>empower us</strong>. If we learn how to harness them wisely, they can free us from routine work and let us focus on what truly matters: <strong>creating, solving, and delivering value</strong>.</p>
<p>Thanks for reading my journey! 💙<br />If you found this project inspiring or want to connect, feel free to reach out to me on:</p>
<ul>
<li><p>🌐 Website: <a target="_blank" href="https://praveshsudha.com/">praveshsudha.com</a></p>
</li>
<li><p>💼 LinkedIn: <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>🐦 Twitter/X: <a target="_blank" href="https://x.com/praveshstwt">x.com/praveshstwt</a></p>
</li>
<li><p>📹 YouTube: <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">@pravesh-sudha</a></p>
</li>
</ul>
]]></description><link>https://blog.praveshsudha.com/automating-cover-letters-with-portia-ai-my-agenthack-2025-journey</link><guid isPermaLink="true">https://blog.praveshsudha.com/automating-cover-letters-with-portia-ai-my-agenthack-2025-journey</guid><category><![CDATA[AgentHack2025 ]]></category><category><![CDATA[Portia]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AI]]></category><category><![CDATA[ai-agent]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Deploying a Highly Scalable & Available Django Application on AWS with Terraform]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome, Devs, to the world of <strong>cloud</strong> and <strong>automation</strong>! 🚀</p>
<p>If you've been following my blogs, you might remember that a few months ago I published an <strong>experimental project</strong> where I created an <a target="_blank" href="https://blog.praveshsudha.com/from-local-to-cloud-deploying-a-django-employment-management-app-with-aws-rds"><strong>Employee Management Application</strong></a> using Django and migrated its database to <strong>Amazon RDS</strong>.</p>
<p>Recently, after clearing my <strong>AWS Solutions Architect Associate</strong> exam 🎉, I started reflecting on some of the exam questions around <strong>resilient</strong> and <strong>highly available</strong> architectures. That sparked an idea: why not take my earlier project and <strong>level it up</strong> to meet <strong>production-grade standards</strong>?</p>
<p>So, in this blog, we'll <strong>upgrade the Employee Management App</strong> and <strong>deploy it on AWS</strong> using <strong>Terraform</strong>. Our end goal? A <strong>highly scalable</strong> and <strong>highly available</strong> architecture, following <strong>real-world market practices</strong>.</p>
<p>Without further ado, let's dive in! 🏗</p>
<hr />
<h2 id="heading-pre-requisites">💡 Pre-Requisites</h2>
<p>Before we jump into building our scalable Django setup, let's make sure you've got the essentials ready.</p>
<ol>
<li><p><strong>AWS Account</strong>: You'll need an AWS account with an <strong>IAM user</strong> that has <strong>AdministratorAccess</strong> permissions <em>(only for the sake of this project; in a production environment, you should always follow the principle of least privilege).</em></p>
</li>
<li><p><strong>AWS CLI Installed</strong>  Download and install the AWS CLI on your local machine.</p>
</li>
<li><p><strong>Configure IAM User</strong>: Run <code>aws configure</code> and provide the IAM user's credentials to set up AWS CLI authentication.</p>
</li>
<li><p><strong>Terraform CLI Installed</strong>  Make sure Terraform is installed so we can define and deploy our infrastructure as code.</p>
</li>
</ol>
<p>Once these items are checked off your list, you're all set to start this awesome project. 🚀</p>
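<p>A quick sanity check that both CLIs are wired up correctly before we start:</p>
<pre><code class="lang-bash">aws sts get-caller-identity   <span class="hljs-comment"># should print your IAM user's account ID and ARN</span>
terraform -version            <span class="hljs-comment"># any recent 1.x release works</span>
</code></pre>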
<hr />
<h2 id="heading-youtube-demonstration">💡 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/idUGgFry72k">https://youtu.be/idUGgFry72k</a></div>
<hr />
<h2 id="heading-setting-up-terraform-remote-backend-amp-secrets">💡 Setting Up Terraform Remote Backend &amp; Secrets</h2>
<p>Since this project follows <strong>best practices</strong> for hosting a <strong>highly scalable</strong> and <strong>highly available</strong> application on AWS, we won't be storing our Terraform state file locally. Instead, we'll store it remotely to ensure team collaboration, backup, and resilience.</p>
<p>For this setup, I'm using <strong>Amazon S3</strong> (with versioning enabled) to store the state file, and <strong>DynamoDB</strong> for state locking. Interestingly, while browsing the <a target="_blank" href="https://developer.hashicorp.com/terraform/language/backend/s3#use_lockfile">Terraform docs</a>, I noticed that the <code>S3 + DynamoDB</code> combination is now deprecated in favor of the <code>use_lockfile</code> backend argument for S3-native state locking. However, the new method didn't work on my machine, so I decided to stick with the tried-and-tested <strong>S3 + DynamoDB</strong> approach.</p>
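<p>For reference, the newer S3-native locking (Terraform v1.10 and later) replaces the DynamoDB table with a single backend argument. This is the variant that did not work for me, so treat it as optional:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket       = <span class="hljs-string">"pravesh-tf-two-tier-bucket"</span>
    key          = <span class="hljs-string">"terraform/terraform.tfstate"</span>
    region       = <span class="hljs-string">"us-east-1"</span>
    use_lockfile = <span class="hljs-literal">true</span>   <span class="hljs-comment"># S3-native state locking, no DynamoDB table needed</span>
    encrypt      = <span class="hljs-literal">true</span>
  }
}
</code></pre>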
<p>The complete Terraform code for this project is available on my GitHub:<br /><a target="_blank" href="https://github.com/Pravesh-Sudha/terra-projects">https://github.com/Pravesh-Sudha/terra-projects</a></p>
<p>Navigate to the scripts directory:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-projects/two-tier-app/scripts
</code></pre>
<p>First, we'll create the <strong>S3 bucket</strong> and <strong>DynamoDB table</strong> for our Terraform backend. I've created a <code>config.sh</code> script for this:</p>
<pre><code class="lang-bash">chmod u+x config.sh
./config.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755269282774/30966873-b55f-4515-bbd8-5813b9118fc0.png" alt class="image--center mx-auto" /></p>
<p>This will create:</p>
<ul>
<li><p>Bucket: <code>pravesh-tf-two-tier-bucket</code></p>
</li>
<li><p>Table: <code>pravesh-state-table</code></p>
</li>
</ul>
<p>If you get an error that the bucket name is already taken, simply edit the name in the <code>config.sh</code> file and try again. Just remember to update the same name inside <code>terra-config/backend.tf</code>.</p>
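<p>The script itself is in the repo; in essence it boils down to AWS CLI calls like these (a sketch using the names above):</p>
<pre><code class="lang-bash">aws s3api create-bucket --bucket pravesh-tf-two-tier-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket pravesh-tf-two-tier-bucket \
  --versioning-configuration Status=Enabled
<span class="hljs-comment"># Terraform's lock table just needs a string hash key named LockID</span>
aws dynamodb create-table --table-name pravesh-state-table \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST --region us-east-1
</code></pre>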
<p>Next, since we'll be deploying an RDS instance for our Django app, we need to securely store the database credentials. For this, we'll use <strong>AWS Secrets Manager</strong>. In the <code>scripts/commands.md</code> file, you'll find the exact commands, but here's the process:</p>
<p><strong>Create AWS Secrets for RDS Instance</strong></p>
<pre><code class="lang-bash">aws secretsmanager create-secret \
  --name <span class="hljs-string">"employee-mgnt/rds-credentials"</span> \
  --secret-string <span class="hljs-string">'{"username":"admin","password":"admin1234"}'</span> \
  --region us-east-1
</code></pre>
<p><strong>Create Policy for IAM User</strong></p>
<pre><code class="lang-bash">aws iam create-policy --policy-name TerraformSecretsRead --policy-document <span class="hljs-string">'{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:&lt;account-id&gt;:secret:employee-mgnt/rds-credentials-*"
    }
  ]
}'</span>
</code></pre>
<p><strong>Attach the Policy to IAM User</strong></p>
<pre><code class="lang-bash">aws iam attach-user-policy --user-name &lt;iam-user-name&gt; --policy-arn arn:aws:iam::&lt;account-id&gt;:policy/TerraformSecretsRead
</code></pre>
<p>Replace <code>&lt;account-id&gt;</code> and <code>&lt;iam-user-name&gt;</code> with your own values before running the commands.</p>
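<p>Before moving on, you can confirm the secret is readable with the same IAM credentials Terraform will use:</p>
<pre><code class="lang-bash">aws secretsmanager get-secret-value \
  --secret-id employee-mgnt/rds-credentials \
  --region us-east-1 --query SecretString --output text
</code></pre>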
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755269318201/943d12e8-d54a-4d1b-ab98-5eb2969cba3d.png" alt class="image--center mx-auto" /></p>
<p>With this, our <strong>Terraform remote backend</strong> and <strong>secure database credentials</strong> are ready, and we can now move on to configuring our infrastructure.</p>
<hr />
<h2 id="heading-understanding-the-terraform-configuration">💡Understanding the Terraform Configuration</h2>
<p>Before we run any commands, let's first understand <strong>what our Terraform code does</strong> and <strong>how it works under the hood</strong>. I'll break down each <code>.tf</code> file so you can see exactly how all the pieces fit together.</p>
<h3 id="heading-providertf"><code>provider.tf</code></h3>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = var.aws_region
}
</code></pre>
<p>We're telling Terraform that our cloud provider is <strong>AWS</strong> and setting the region via the variable <code>aws_region</code> (default: <code>us-east-1</code>).</p>
<h3 id="heading-variablestf"><code>variables.tf</code></h3>
<pre><code class="lang-bash">variable <span class="hljs-string">"aws_region"</span> {
  default = <span class="hljs-string">"us-east-1"</span>
}
variable <span class="hljs-string">"project_name"</span> {
  default = <span class="hljs-string">"employee-mgnt"</span>
}
variable <span class="hljs-string">"vpc_cidr"</span> {
  default = <span class="hljs-string">"10.0.0.0/16"</span>
}
variable <span class="hljs-string">"public_subnet_cidrs"</span> {
  default = [<span class="hljs-string">"10.0.1.0/24"</span>, <span class="hljs-string">"10.0.2.0/24"</span>]
}
variable <span class="hljs-string">"private_subnet_cidrs"</span> {
  default = [<span class="hljs-string">"10.0.11.0/24"</span>, <span class="hljs-string">"10.0.12.0/24"</span>]
}
variable <span class="hljs-string">"instance_type"</span> {
  default = <span class="hljs-string">"t3.micro"</span>
}
</code></pre>
<p>Here we define key parameters like <strong>VPC CIDR range</strong>, <strong>public/private subnets</strong>, <strong>EC2 instance type</strong>, and <strong>project name</strong>.</p>
<h3 id="heading-backendtf"><code>backend.tf</code></h3>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket         = <span class="hljs-string">"pravesh-tf-two-tier-bucket"</span>
    key            = <span class="hljs-string">"terraform/terraform.tfstate"</span>
    region         = <span class="hljs-string">"us-east-1"</span> <span class="hljs-comment"># backend blocks cannot reference variables</span>
    dynamodb_table = <span class="hljs-string">"pravesh-state-table"</span>
    encrypt        = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>This is our <strong>Terraform remote backend</strong> configuration. We're storing the state file in <strong>S3</strong> (with encryption) and using <strong>DynamoDB</strong> for state locking.</p>
<h3 id="heading-vpctf"><code>vpc.tf</code></h3>
<p>We use the official <strong>terraform-aws-vpc</strong> module to create:</p>
<ul>
<li><p><strong>VPC</strong> with custom CIDR</p>
</li>
<li><p><strong>Public &amp; private subnets</strong></p>
</li>
<li><p><strong>NAT Gateway</strong> for outbound internet in private subnets</p>
</li>
<li><p><strong>DNS hostnames &amp; support</strong> enabled</p>
</li>
</ul>
<pre><code class="lang-bash">module <span class="hljs-string">"vpc"</span> {
  <span class="hljs-built_in">source</span> = <span class="hljs-string">"terraform-aws-modules/vpc/aws"</span>

  name = var.project_name
  cidr = var.vpc_cidr

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway   = <span class="hljs-literal">true</span>
  single_nat_gateway   = <span class="hljs-literal">false</span>
  enable_dns_hostnames = <span class="hljs-literal">true</span>
  enable_dns_support   = <span class="hljs-literal">true</span>

  tags = {
    Project = var.project_name
  }
}
</code></pre>
<h3 id="heading-securitygroupstf"><code>security_groups.tf</code></h3>
<p>We define <strong>three security groups</strong>:</p>
<ol>
<li><p><strong>Load Balancer SG</strong>  Allows HTTP (80) &amp; HTTPS (443) from the internet.</p>
</li>
<li><p><strong>App SG</strong>  Allows traffic on port 8000 <em>only from the ALB SG</em>.</p>
</li>
<li><p><strong>RDS SG</strong>  Allows MySQL traffic (3306) <em>only from the App SG</em>.</p>
</li>
</ol>
<p>This creates a secure, layered approach  <strong>only necessary services talk to each other</strong>.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Load-Balancer Security Group</span>
resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"alb_sg"</span> {
  name        = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-alb_sg"</span>
  description = <span class="hljs-string">"Allow HTTP/HTTPS access from Internet"</span>
  vpc_id      = module.vpc.vpc_id

  tags = {
    Name = <span class="hljs-string">"alb_sg"</span>
  }
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"alb_sg_http_ipv4"</span> {
  security_group_id = aws_security_group.alb_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  from_port         = 80
  ip_protocol       = <span class="hljs-string">"tcp"</span>
  to_port           = 80
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"alb_sg_https_ipv4"</span> {
  security_group_id = aws_security_group.alb_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  from_port         = 443
  ip_protocol       = <span class="hljs-string">"tcp"</span>
  to_port           = 443
}

resource <span class="hljs-string">"aws_vpc_security_group_egress_rule"</span> <span class="hljs-string">"alb_sg_egress"</span> {
  security_group_id = aws_security_group.alb_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  ip_protocol       = <span class="hljs-string">"-1"</span>
}

<span class="hljs-comment"># Frontend (Application) Security Group</span>
resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"app_sg"</span> {
  name        = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-app_sg"</span>
  description = <span class="hljs-string">"Allow port 8000 access from ALB only"</span>
  vpc_id      = module.vpc.vpc_id

  tags = {
    Name = <span class="hljs-string">"app_sg"</span>
  }
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"app_sg_from_alb"</span> {
  security_group_id            = aws_security_group.app_sg.id
  referenced_security_group_id = aws_security_group.alb_sg.id
  from_port                    = 8000
  ip_protocol                  = <span class="hljs-string">"tcp"</span>
  to_port                      = 8000
}

resource <span class="hljs-string">"aws_vpc_security_group_egress_rule"</span> <span class="hljs-string">"app_sg_to_rds"</span> {
  security_group_id = aws_security_group.app_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  ip_protocol       = <span class="hljs-string">"-1"</span>
}

<span class="hljs-comment"># Database (Backend) Security Group</span>
resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"rds_sg"</span> {
  name        = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-rds_sg"</span>
  description = <span class="hljs-string">"Allow port 3306 access from App SG only"</span>
  vpc_id      = module.vpc.vpc_id

  tags = {
    Name = <span class="hljs-string">"rds_sg"</span>
  }
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"rds_sg_from_app"</span> {
  security_group_id            = aws_security_group.rds_sg.id
  referenced_security_group_id = aws_security_group.app_sg.id
  from_port                    = 3306
  ip_protocol                  = <span class="hljs-string">"tcp"</span>
  to_port                      = 3306
}

resource <span class="hljs-string">"aws_vpc_security_group_egress_rule"</span> <span class="hljs-string">"rds_sg_egress"</span> {
  security_group_id = aws_security_group.rds_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  ip_protocol       = <span class="hljs-string">"-1"</span>
}
</code></pre>
<h3 id="heading-datatf"><code>data.tf</code></h3>
<p>We fetch:</p>
<ul>
<li><p><strong>Available AZs</strong> in the region</p>
</li>
<li><p><strong>Latest Ubuntu AMI</strong></p>
</li>
<li><p><strong>Database credentials</strong> from AWS Secrets Manager</p>
</li>
</ul>
<p>We also define <code>userdata</code> for EC2 bootstrap configuration.</p>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_availability_zones"</span> <span class="hljs-string">"available"</span> {
  state = <span class="hljs-string">"available"</span>
}

data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"ubuntu"</span> {
  most_recent = <span class="hljs-literal">true</span>
  owners      = [<span class="hljs-string">"099720109477"</span>] <span class="hljs-comment"># Canonical's account ID; Ubuntu AMIs are not published by "amazon"</span>
  filter {
    name   = <span class="hljs-string">"name"</span>
    values = [<span class="hljs-string">"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"</span>]
  }
}

data <span class="hljs-string">"aws_secretsmanager_secret_version"</span> <span class="hljs-string">"rds_credentials"</span> {
  secret_id = <span class="hljs-string">"employee-mgnt/rds-credentials"</span>  
}

locals {
  userdata = templatefile(<span class="hljs-string">"<span class="hljs-variable">${path.module}</span>/userdata.tpl"</span>, {
    DB_NAME     = <span class="hljs-string">"employee_db"</span>
    DB_USER     = jsondecode(data.aws_secretsmanager_secret_version.rds_credentials.secret_string)[<span class="hljs-string">"username"</span>]
    DB_PASSWORD = jsondecode(data.aws_secretsmanager_secret_version.rds_credentials.secret_string)[<span class="hljs-string">"password"</span>]
    DB_PORT     = <span class="hljs-string">"3306"</span>
    DB_HOST     = aws_db_instance.rds_instance.endpoint
  })
}
</code></pre>
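<p>A small aside: since <code>jsondecode</code> is applied to the same secret string twice, the payload could be decoded once into its own local. This is an optional refactor sketch, not part of the original code:</p>
<pre><code class="lang-bash">locals {
  # Decode the Secrets Manager payload once and reuse it
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.rds_credentials.secret_string)

  userdata = templatefile("${path.module}/userdata.tpl", {
    DB_NAME     = "employee_db"
    DB_USER     = local.db_creds["username"]
    DB_PASSWORD = local.db_creds["password"]
    DB_PORT     = "3306"
    DB_HOST     = aws_db_instance.rds_instance.endpoint
  })
}
</code></pre>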
<h3 id="heading-albtf"><code>alb.tf</code></h3>
<p>Creates:</p>
<ul>
<li><p><strong>Application Load Balancer</strong> in public subnets</p>
</li>
<li><p><strong>Target Group</strong> (port 8000, health check at <code>/health</code>)</p>
</li>
<li><p><strong>Listener</strong> on port 80 forwarding requests to the target group</p>
</li>
</ul>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_lb"</span> <span class="hljs-string">"app_alb"</span> {
  name               = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-alb"</span>
  internal           = <span class="hljs-literal">false</span>
  load_balancer_type = <span class="hljs-string">"application"</span>
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = module.vpc.public_subnets
}

resource <span class="hljs-string">"aws_lb_target_group"</span> <span class="hljs-string">"app_tg"</span> {
  name     = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-tg"</span>
  port     = 8000
  protocol = <span class="hljs-string">"HTTP"</span>
  vpc_id   = module.vpc.vpc_id
  health_check {
    path                = <span class="hljs-string">"/health"</span>
    protocol            = <span class="hljs-string">"HTTP"</span>
    matcher             = <span class="hljs-string">"200"</span>
    healthy_threshold   = 2
    unhealthy_threshold = 3
    interval            = 30
    timeout             = 5
  }
}

resource <span class="hljs-string">"aws_lb_listener"</span> <span class="hljs-string">"app_listener"</span> {
  load_balancer_arn = aws_lb.app_alb.arn
  port              = <span class="hljs-string">"80"</span>
  protocol          = <span class="hljs-string">"HTTP"</span>

  default_action {
    <span class="hljs-built_in">type</span>             = <span class="hljs-string">"forward"</span>
    target_group_arn = aws_lb_target_group.app_tg.arn
  }
}
</code></pre>
<h3 id="heading-iamtf"><code>iam.tf</code></h3>
<p>Sets up:</p>
<ul>
<li><p><strong>IAM Role</strong> for EC2</p>
</li>
<li><p><strong>SSM Policy</strong> attachment (for remote management)</p>
</li>
<li><p><strong>Instance Profile</strong> for EC2 instances</p>
</li>
</ul>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ec2-role"</span> {
  name = <span class="hljs-string">"ec2-role"</span>

  assume_role_policy = jsonencode({
    Version = <span class="hljs-string">"2012-10-17"</span>
    Statement = [
      {
        Action = <span class="hljs-string">"sts:AssumeRole"</span>
        Effect = <span class="hljs-string">"Allow"</span>
        Principal = {
          Service = <span class="hljs-string">"ec2.amazonaws.com"</span>
        }
      },
    ]
  })
}

resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ssm_policy"</span> {
  role       = aws_iam_role.ec2-role.name
  policy_arn = <span class="hljs-string">"arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"</span>
}

resource <span class="hljs-string">"aws_iam_instance_profile"</span> <span class="hljs-string">"instance_profile"</span> {
  name = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-instance_profile"</span>
  role = aws_iam_role.ec2-role.name
}
</code></pre>
<h3 id="heading-asglaunchtemplatetf"><code>asg_launch_template.tf</code></h3>
<p>Defines:</p>
<ul>
<li><p><strong>Launch Template</strong> with Ubuntu AMI, instance type, security group, and bootstrap script</p>
</li>
<li><p><strong>Auto Scaling Group</strong> across private subnets with ALB health checks</p>
</li>
</ul>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_launch_template"</span> <span class="hljs-string">"lt"</span> {
  name_prefix   = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-lt-"</span>
  image_id      = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  iam_instance_profile {
    name = aws_iam_instance_profile.instance_profile.name
  }

  network_interfaces {
    security_groups             = [aws_security_group.app_sg.id]
    associate_public_ip_address = <span class="hljs-literal">false</span>
  }

  user_data = base64encode(local.userdata)

}

resource <span class="hljs-string">"aws_autoscaling_group"</span> <span class="hljs-string">"app-ag"</span> {
  name                = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-ag"</span>
  max_size            = 2
  min_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = module.vpc.private_subnets

  launch_template {
    id      = aws_launch_template.lt.id
    version = <span class="hljs-string">"<span class="hljs-variable">$Latest</span>"</span>
  }
  target_group_arns = [aws_lb_target_group.app_tg.arn]

  <span class="hljs-comment"># Tie ASG health to ALB checks</span>
  health_check_type         = <span class="hljs-string">"ELB"</span>
  health_check_grace_period = 120

  tag {
    key                 = <span class="hljs-string">"Name"</span>
    value               = <span class="hljs-string">"<span class="hljs-variable">${var.project_name}</span>-app"</span>
    propagate_at_launch = <span class="hljs-literal">true</span>
  }
}
</code></pre>
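<p>Note that the ASG above defines capacity bounds (1–2 instances) but no policy for moving between them. If you wanted it to scale on load, a target-tracking policy could be attached; this is a sketch of an optional addition, not part of the original setup:</p>
<pre><code class="lang-bash">resource "aws_autoscaling_policy" "cpu_target" {
  # Hypothetical addition: keep average CPU around 60%
  name                   = "${var.project_name}-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.app-ag.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60
  }
}
</code></pre>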
<h3 id="heading-userdatatpl"><code>userdata.tpl</code></h3>
<p>Bootstrap script that:</p>
<ul>
<li><p>Installs dependencies</p>
</li>
<li><p>Fetches DB credentials from Secrets Manager</p>
</li>
<li><p>Creates the database if it doesn't exist</p>
</li>
<li><p>Clones the <strong>Employee Management</strong> Django app from GitHub</p>
</li>
<li><p>Runs migrations</p>
</li>
<li><p>Starts the app with <strong>Gunicorn</strong> on port 8000</p>
</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -e
<span class="hljs-built_in">export</span> DEBIAN_FRONTEND=noninteractive

<span class="hljs-comment"># basic deps</span>
apt-get update -y
apt-get install -y git python3 python3-pip python3-venv mysql-client libmysqlclient-dev build-essential pkg-config

<span class="hljs-comment"># export DB env variables (inherited by processes started below)</span>
<span class="hljs-built_in">export</span> DB_NAME=<span class="hljs-string">"<span class="hljs-variable">${DB_NAME}</span>"</span>
<span class="hljs-built_in">export</span> DB_USER=<span class="hljs-string">"<span class="hljs-variable">${DB_USER}</span>"</span>
<span class="hljs-built_in">export</span> DB_PASSWORD=<span class="hljs-string">"<span class="hljs-variable">${DB_PASSWORD}</span>"</span>
<span class="hljs-built_in">export</span> DB_HOST=<span class="hljs-string">"<span class="hljs-variable">${DB_HOST}</span>"</span>
<span class="hljs-built_in">export</span> DB_PORT=<span class="hljs-string">"3306"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"DB_NAME=<span class="hljs-variable">$DB_NAME</span>"</span> &gt;&gt; /home/ubuntu/userdata.log
<span class="hljs-built_in">echo</span> <span class="hljs-string">"DB_USER=<span class="hljs-variable">$DB_USER</span>"</span> &gt;&gt; /home/ubuntu/userdata.log
<span class="hljs-built_in">echo</span> <span class="hljs-string">"DB_PASSWORD=<span class="hljs-variable">$DB_PASSWORD</span>"</span> &gt;&gt; /home/ubuntu/userdata.log
<span class="hljs-built_in">echo</span> <span class="hljs-string">"DB_HOST=<span class="hljs-variable">$DB_HOST</span>"</span> &gt;&gt; /home/ubuntu/userdata.log
<span class="hljs-built_in">echo</span> <span class="hljs-string">"DB_PORT=<span class="hljs-variable">$DB_PORT</span>"</span> &gt;&gt; /home/ubuntu/userdata.log

<span class="hljs-comment"># Create database if it doesn't exist</span>
mysql -h <span class="hljs-string">"<span class="hljs-subst">$(echo ${DB_HOST} | cut -d : -f1)</span>"</span> -u <span class="hljs-string">"<span class="hljs-variable">${DB_USER}</span>"</span> -p<span class="hljs-string">"<span class="hljs-variable">${DB_PASSWORD}</span>"</span> -P <span class="hljs-string">"<span class="hljs-variable">${DB_PORT}</span>"</span> -e <span class="hljs-string">"CREATE DATABASE IF NOT EXISTS <span class="hljs-variable">${DB_NAME}</span>;"</span> 2&gt;&gt; /home/ubuntu/userdata.log

<span class="hljs-comment"># Remove existing /home/ubuntu/app directory if it exists</span>
<span class="hljs-keyword">if</span> [ -d <span class="hljs-string">"/home/ubuntu/app"</span> ]; <span class="hljs-keyword">then</span>
  rm -rf /home/ubuntu/app
<span class="hljs-keyword">fi</span>

<span class="hljs-comment"># Clone &amp; install app</span>
git <span class="hljs-built_in">clone</span> https://github.com/Pravesh-Sudha/employee_management.git /home/ubuntu/app 2&gt;&gt; /home/ubuntu/userdata.log
<span class="hljs-built_in">cd</span> /home/ubuntu/app
python3 -m venv venv
<span class="hljs-built_in">source</span> venv/bin/activate
pip install -r requirements.txt 2&gt;&gt; /home/ubuntu/userdata.log

chown -R ubuntu:ubuntu /home/ubuntu/app


<span class="hljs-comment"># Small randomized sleep to reduce concurrent migrations</span>
sleep $((RANDOM % <span class="hljs-number">10</span>))

<span class="hljs-comment"># Retry migrations (5 attempts)</span>
attempt=0
until [ <span class="hljs-variable">$attempt</span> -ge 5 ]
<span class="hljs-keyword">do</span>
  attempt=$((attempt+<span class="hljs-number">1</span>))
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Running migrations (attempt <span class="hljs-variable">$attempt</span>)..."</span> | tee -a /home/ubuntu/userdata.log
  python manage.py migrate --noinput &gt;&gt; /home/ubuntu/migrate.log 2&gt;&amp;1 &amp;&amp; <span class="hljs-built_in">break</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Migrate failed, retrying in 5s..."</span> | tee -a /home/ubuntu/userdata.log
  sleep 5
<span class="hljs-keyword">done</span>

<span class="hljs-comment"># start gunicorn</span>
nohup /home/ubuntu/app/venv/bin/gunicorn --workers 2 --timeout 60 --access-logfile /home/ubuntu/gunicorn-access.log --error-logfile /home/ubuntu/gunicorn-error.log employee_management.wsgi:application --<span class="hljs-built_in">bind</span> 0.0.0.0:8000 &amp;

<span class="hljs-comment"># Done</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"user-data finished"</span>
</code></pre>
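<p>One detail worth calling out in the <code>mysql</code> line above: Terraform's <code>aws_db_instance.endpoint</code> attribute returns <code>host:port</code>, while the <code>-h</code> flag expects just the hostname, which is why the script pipes it through <code>cut</code>. With a made-up endpoint value:</p>
<pre><code class="lang-bash"># Example (fictional) RDS endpoint as Terraform renders it
endpoint="employee-db.abc123xyz.us-east-1.rds.amazonaws.com:3306"

# Keep everything before the first ":", i.e. the bare hostname
host="$(echo "$endpoint" | cut -d : -f1)"
echo "$host"   # employee-db.abc123xyz.us-east-1.rds.amazonaws.com
</code></pre>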
<h3 id="heading-outputstf"><code>outputs.tf</code></h3>
<p>Displays:</p>
<ul>
<li><p><strong>ALB DNS Name</strong>: the public URL of our app</p>
</li>
<li><p><strong>RDS Endpoint</strong>: the database connection string</p>
</li>
</ul>
<pre><code class="lang-bash">output <span class="hljs-string">"alb_dns_name"</span> {
  description = <span class="hljs-string">"DNS name of the Application Load Balancer"</span>
  value       = aws_lb.app_alb.dns_name
}


output <span class="hljs-string">"rds_endpoint"</span> {
  description = <span class="hljs-string">"Endpoint of the RDS instance"</span>
  value       = aws_db_instance.rds_instance.endpoint
}
</code></pre>
<h3 id="heading-running-the-terraform-code">Running the Terraform Code</h3>
<p>Now that we understand the structure, let's run it:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> terra-projects/two-tier-app/terra-config
terraform init
terraform plan
terraform apply --auto-approve
</code></pre>
<p>Provisioning will take <strong>10–15 minutes</strong>, so grab a coffee while AWS spins up your infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755277920880/84bbbfb2-2b2d-4b03-9e6e-9591428ce9ca.png" alt class="image--center mx-auto" /></p>
<p>Once the architecture is ready, open the <code>alb_dns_name</code> in your web browser, and you'll see the Django Employee Management application up and running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755278291697/de6028db-0083-45f2-b021-3f73f6de65a7.png" alt class="image--center mx-auto" /></p>
<p>You can now:</p>
<ul>
<li><p>Add employee information and save it.</p>
</li>
<li><p>Refresh the browser to confirm the data persists, proving it's connected to the RDS MySQL database.</p>
</li>
<li><p>Check the <strong>Target Groups</strong> in the Load Balancer console to see your healthy instances in action.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755278307957/bfc22369-6b6c-4815-85ab-a4d49d7079d8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>With this setup, we've successfully deployed a <strong>highly available, scalable, and secure</strong> Django Employee Management application on AWS, with credentials securely stored in <strong>AWS Secrets Manager</strong>.</p>
<p>You can check the application logs via an SSM session:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755278413528/bc9f6425-0632-44a3-a304-a3a5f8354df6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-clean-up-to-avoid-charges"><strong>Clean-Up to Avoid Charges:</strong></h3>
<p>After experimenting, run the following command to delete all AWS resources:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p>To remove the S3 bucket and DynamoDB table used for Terraform state, navigate to the <code>scripts</code> directory and run:</p>
<pre><code class="lang-bash">chmod u+x delete.sh
./delete.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755278476580/8757b729-251f-4097-89c9-63c21c9f4792.png" alt class="image--center mx-auto" /></p>
<p>This ensures all infrastructure is cleaned up, preventing unnecessary costs.</p>
<hr />
<h2 id="heading-conclusion"><strong>💡 Conclusion</strong></h2>
<p>In this project, we successfully deployed a <strong>Django Employee Management Application</strong> on AWS using a highly available, scalable, and secure architecture. By integrating <strong>ALB, EC2 Auto Scaling, RDS MySQL, Secrets Manager</strong>, and an <strong>S3 + DynamoDB backend for Terraform state</strong>, we created a production-ready setup that can handle growth while keeping credentials safe.</p>
<p>We also explored how the <strong>application persists data in RDS</strong>, verified healthy instances via <strong>Target Groups</strong>, and ensured cost optimization by cleaning up resources with <code>terraform destroy</code> and custom scripts. This hands-on project not only strengthens your AWS and Terraform skills but also gives you a blueprint for hosting real-world applications in the cloud.</p>
<p>If you found this guide helpful, don't forget to connect with me and explore more of my work:</p>
<ul>
<li><p>🌐 <strong>Website:</strong> <a target="_blank" href="https://praveshsudha.com/">praveshsudha.com</a></p>
</li>
<li><p> <strong>Blog:</strong> <a target="_blank" href="https://blog.praveshsudha.com">blog.praveshsudha.com</a></p>
</li>
<li><p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">linkedin.com/in/pravesh-sudha</a></p>
</li>
<li><p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://x.com/praveshstwt">x.com/PraveshStwt</a></p>
</li>
<li><p>📺 <strong>YouTube:</strong> <a target="_blank" href="https://youtube.com/@pravesh-sudha">Pravesh-Sudha</a></p>
</li>
</ul>
]]></description><link>https://blog.praveshsudha.com/deploying-a-highly-scalable-and-available-django-application-on-aws-with-terraform</link><guid isPermaLink="true">https://blog.praveshsudha.com/deploying-a-highly-scalable-and-available-django-application-on-aws-with-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS RDS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Devops]]></category><category><![CDATA[vpc]]></category><category><![CDATA[Django]]></category><category><![CDATA[Python]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 My Journey to Passing the AWS Solutions Architect Associate Exam]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of cloud computing and certifications!<br />In this blog, I'm excited to share my journey of preparing for and clearing the <strong>AWS Certified Solutions Architect – Associate (SAA-C03)</strong> exam. I'll walk you through the strategies, resources, and practical steps that helped me succeed: insights that I believe can serve as a helpful guide for anyone aiming to strengthen their AWS skills and gain hands-on experience with core services.</p>
<p>Whether you're just starting with AWS or looking to validate your cloud knowledge through certification, I hope this post will offer practical direction and boost your confidence.<br />So, without further ado, let's dive in.</p>
<hr />
<h2 id="heading-youtube-demonstration">💡 YouTube Demonstration</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/K2WHXgU-Km0">https://youtu.be/K2WHXgU-Km0</a></div>
<p> </p>
<hr />
<h2 id="heading-what-is-the-aws-solutions-architect-associate-saa-c03-certification">💡 What is the AWS Solutions Architect Associate (SAA-C03) Certification?</h2>
<p>The <strong>AWS Certified Solutions Architect – Associate (SAA-C03)</strong> is a <strong>mid-level certification</strong> designed for individuals who want to demonstrate their ability to design well-architected, cost-effective, scalable, and secure solutions on AWS. It's ideal for candidates with <strong>prior cloud knowledge</strong> or <strong>strong on-premises IT experience</strong> looking to validate their architecture skills in the AWS ecosystem.</p>
<p>One of the key advantages of this certification is that it <strong>doesn't require deep programming expertise</strong>. However, having a working knowledge of programming concepts (especially related to APIs, automation, and infrastructure-as-code) can certainly help during both the preparation and the exam.</p>
<p>This certification validates your understanding of a broad range of AWS services and how to design architectures that meet specific business and technical requirements. In short, it positions you as someone capable of <strong>designing reliable cloud environments at scale</strong>.</p>
<blockquote>
<p>💡 <strong>Before diving into your preparation, I highly recommend reading the official exam guide:</strong><br /><a target="_blank" href="https://d1.awsstatic.com/onedam/marketing-channels/website/aws/en_US/certification/approved/pdfs/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf">AWS SAA-C03 Exam Guide PDF</a></p>
</blockquote>
<h3 id="heading-exam-overview">📘 Exam Overview</h3>
<ul>
<li><p><strong>Number of Questions:</strong> 65<br />  <em>(50 scored + 15 unscored experimental questions; all are presented equally, so aim to answer all 65)</em></p>
</li>
<li><p><strong>Question Types:</strong> Multiple choice and multiple response</p>
</li>
<li><p><strong>Duration:</strong> 130 minutes (You can submit early)</p>
</li>
<li><p><strong>Cost:</strong> $150 + taxes</p>
</li>
<li><p><strong>Passing Score:</strong> 720/1000</p>
</li>
</ul>
<h3 id="heading-exam-domains-breakdown">🧠 Exam Domains Breakdown</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Domain</td><td>Focus Area</td><td>Weight</td></tr>
</thead>
<tbody>
<tr>
<td><strong>1</strong></td><td>Design Secure Architectures</td><td>30%</td></tr>
<tr>
<td><strong>2</strong></td><td>Design Resilient Architectures</td><td>26%</td></tr>
<tr>
<td><strong>3</strong></td><td>Design High-Performing Architectures</td><td>24%</td></tr>
<tr>
<td><strong>4</strong></td><td>Design Cost-Optimized Architectures</td><td>20%</td></tr>
</tbody>
</table>
</div><p>Each domain is deeply tied to AWS best practices, the <strong>Well-Architected Framework</strong>, and real-world solution design scenarios.</p>
<hr />
<h2 id="heading-my-preparation-journey-from-community-builder-to-certified-architect">💡 My Preparation Journey: From Community Builder to Certified Architect</h2>
<p>My certification journey began with a moment of gratitude and opportunity.</p>
<p>In <strong>March 2025</strong>, I was honored to be selected as an <strong>AWS Community Builder</strong> under the <strong>Containers</strong> category. One of the perks of this program is a <strong>100% discount voucher</strong> for any AWS Certification exam, an amazing benefit AWS offers its builders every year. I knew I wanted to use it wisely.</p>
<p>By the end of <strong>June</strong>, I made up my mind to take on the <strong>AWS Solutions Architect Associate</strong> exam. Since this was going to be my <strong>first cloud certification</strong>, I wanted something both <strong>challenging and meaningful</strong>, something that would push me to understand AWS at a deeper architectural level. And SAA-C03 was exactly that.</p>
<h3 id="heading-getting-started-the-right-learning-path">📚 Getting Started: The Right Learning Path</h3>
<p>I kicked off my preparation with <strong>Andrew Brown's 50-hour AWS Solutions Architect Associate course</strong> on FreeCodeCamp's YouTube channel:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/c3Cn4xYfxJY?si=vw8Szr5ZCHFuSsAY">https://youtu.be/c3Cn4xYfxJY?si=vw8Szr5ZCHFuSsAY</a></div>
<p> </p>
<p>This course turned out to be <strong>a goldmine</strong>. It covered all the essential AWS services (<strong>EC2, S3, VPC, RDS, IAM, CloudFront</strong>, and many others) along with real-world architecture use cases and hands-on labs. I dedicated <strong>two hours every day</strong>, and in about <strong>25 days</strong>, I was able to complete the entire course. I then spent <strong>two additional days revisiting my notes</strong>, reinforcing the concepts and focusing on my weak areas.</p>
<h3 id="heading-no-mocks-heres-what-i-did-instead">No Mocks? Here's What I Did Instead</h3>
<p>Now, here's an important point:</p>
<blockquote>
<p>I did <strong>not</strong> take any full-length <strong>mock tests</strong> during my preparation.</p>
</blockquote>
<p>That said, I <strong>strongly recommend</strong> that you do! Mock tests are incredibly useful for identifying gaps in your understanding and getting familiar with the exam pattern.</p>
<p>Instead, I practiced with over <strong>350 questions</strong> from this curated <strong>YouTube playlist</strong>:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtube.com/playlist?list=PLviC8AFqAj5Don8_nHu1ELghHLbL8hByM&amp;si=UfNBNx-yge11kAXv">https://youtube.com/playlist?list=PLviC8AFqAj5Don8_nHu1ELghHLbL8hByM&amp;si=UfNBNx-yge11kAXv</a></div>
<p> </p>
<p>These questions were aligned with the topics covered in the exam and gave me enough confidence to evaluate my readiness.</p>
<h3 id="heading-bridging-the-gaps-with-tutorials-dojo">🔍 Bridging the Gaps with Tutorials Dojo</h3>
<p>Even after all that practice, there were still a few services that seemed <strong>overlapping in functionality</strong>, and I wasn't entirely sure when to use which. That's when I turned to one of the most reliable resources in the AWS certification world: <strong>Tutorials Dojo</strong>.</p>
<p>Their <strong>Cheat Sheets</strong> helped me quickly compare services, understand real-life use cases, and sharpen my differentiation skills between similar offerings like <strong>S3 vs. EFS vs. FSx</strong>, or <strong>ALB vs. NLB vs. Gateway Load Balancer</strong>.</p>
<p>🧠 <a target="_blank" href="https://tutorialsdojo.com/">Visit Tutorials Dojo</a>, highly recommended for last-minute revision and concept clarity.</p>
<p>Here is my CheatSheet:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754393212198/37ceadee-114e-4a7d-8b67-94a392f37c6b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-final-step-booking-the-exam">🗓 The Final Step: Booking the Exam</h3>
<p>On <strong>August 2nd</strong>, feeling confident and well-prepared, I used my AWS-provided voucher and <strong>scheduled the exam for the very next day: Sunday, August 3rd</strong>.</p>
<p>With solid prep behind me and a calm mindset, I was ready to put my knowledge to the test.</p>
<hr />
<h2 id="heading-exam-day-nerves-questions-and-a-surprise-ending">💡 Exam Day: Nerves, Questions, and a Surprise Ending</h2>
<p>The big day had finally arrived: <strong>Sunday, August 3rd</strong>. My AWS Solutions Architect Associate exam was scheduled from <strong>12:30 PM to 3:00 PM</strong>. I had already set up <strong>Pearson VUE's OnVUE application</strong> the night before, so by <strong>12:00 PM</strong>, I logged in, completed the <strong>identity verification</strong> and <strong>system checks</strong>, and waited as the proctor launched my session.</p>
<p>Right on time, at <strong>12:30 PM</strong>, the exam began.</p>
<p>The interface was clean, and I quickly found my rhythm. I worked through all <strong>65 questions</strong>, staying focused and managing time carefully. I made sure to <strong>flag any questions I wasn't fully confident about</strong>, which helped during the review.</p>
<p>With <strong>just five minutes left</strong>, I had completed the exam and went back to <strong>review all the questions</strong> once. Even though I felt I had performed well, <strong>imposter syndrome</strong> started to creep in: that familiar self-doubt whispered that maybe I'd misunderstood the questions, or worse, wasted my only voucher.</p>
<p>The exam results weren't immediate; AWS mentioned it could take up to <strong>five business days</strong> to receive the score. So, to calm my nerves, I decided to distract myself by watching one of <strong>Kevin Hart's comedy specials</strong>, and thankfully, it worked. I was laughing and had finally started to unwind.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754395001672/53030e17-60aa-4141-9755-d4386f713aa9.gif" alt class="image--center mx-auto" /></p>
<p>But then, at around <strong>5:00 PM</strong>, I casually checked my email, and there it was.</p>
<p>The <strong>first email was from Credly</strong>, notifying me that I had earned the badge for <strong>AWS Certified Solutions Architect – Associate</strong>. My heart raced. I immediately logged into the AWS Certification portal using my <strong>Community Builder ID</strong> and downloaded the results.</p>
<blockquote>
<p>To my surprise and delight, I had scored <strong>800 out of 1000</strong>.</p>
</blockquote>
<p>It was a proud and fulfilling moment. All those hours of study, practice, and note-taking had paid off. Not only had I cleared the exam, but I had done so with a score that validated my understanding and effort.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754393792836/0b5f5f20-fed7-4811-84bc-1297542a9f96.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-whats-next">💡 What's Next?</h2>
<p>Clearing the <strong>AWS Solutions Architect Associate</strong> exam has not only boosted my confidence but also deepened my understanding of AWS architecture and services. I now feel more prepared to take the next step in my cloud journey.</p>
<p>Up next on my roadmap is the <strong>AWS DevOps Engineer – Professional (DOP-C02)</strong> certification. This aligns perfectly with my passion for <strong>DevOps</strong>, automation, and building scalable, reliable infrastructure using AWS-native tools. I'm excited to explore how AWS can empower and streamline modern DevOps workflows, and of course, I'll be sharing my preparation journey with you along the way.</p>
<hr />
<h2 id="heading-conclusion">💡 Conclusion</h2>
<p>I hope this blog gave you a clear and practical roadmap to prepare for the AWS Solutions Architect Associate exam. Whether you're just starting out or looking for motivation to get back on track, remember: consistency and the right resources make all the difference.</p>
<p>If you found this helpful, please <strong>give this blog a thumbs-up</strong>, <strong>share it with your network</strong>, and consider <strong>subscribing to my newsletter</strong> for more AWS, DevOps, and cloud-related content.</p>
<p>Feel free to connect with me here:<br />🔗 <strong>Website</strong>: <a target="_blank" href="https://praveshsudha.com/">praveshsudha.com</a><br />📝 <strong>Blog</strong>: <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a><br />🐦 <strong>Twitter</strong>: <a target="_blank" href="https://x.com/praveshstwt">praveshstwt</a><br />💼 <strong>LinkedIn</strong>: <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">Pravesh Sudha</a><br />📺 <strong>YouTube</strong>: <a target="_blank" href="https://youtube.com/@pravesh-sudha">Pravesh Sudha</a></p>
<p>Thanks for reading, and best of luck on your AWS certification journey! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/my-journey-to-passing-the-aws-solutions-architect-associate-exam</link><guid isPermaLink="true">https://blog.praveshsudha.com/my-journey-to-passing-the-aws-solutions-architect-associate-exam</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Certified Solutions Architect Associate]]></category><category><![CDATA[Programming Tips]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[🚀 Where Should You Store Terraform State Files for Maximum Efficiency?]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of Cloud and Infrastructure as Code (IaC)! If you're building infrastructure with Terraform (one of the most popular tools in the DevOps ecosystem), you've probably come across the mysterious <code>terraform.tfstate</code> file. This small yet crucial file is the backbone of how Terraform tracks infrastructure resources.</p>
<p>In this blog, we'll dive into:</p>
<ul>
<li><p>What the <code>terraform.tfstate</code> file is</p>
</li>
<li><p>Why relying on a <strong>local state file</strong> can cause issues</p>
</li>
<li><p>And how to properly <strong>store Terraform state remotely</strong> using <strong>AWS S3</strong> and <strong>DynamoDB</strong></p>
</li>
</ul>
<p>Whether you're a student exploring Terraform or a cloud enthusiast looking to follow best practices, this guide will help you understand how state management works and how to secure and scale your infrastructure properly.</p>
<p>So without further ado, let's get started! 🚀</p>
<hr />
<h2 id="heading-prerequisites">💡 Prerequisites</h2>
<p>Before we dive into configuring remote state storage in Terraform, ensure you have the following setup ready:</p>
<p> <strong>An AWS Account</strong>: you'll need access to an AWS account with an <strong>IAM user</strong> that has at least <strong>EC2FullAccess, S3FullAccess, and DynamoDBFullAccess</strong> permissions. You can grant broader permissions like <code>AdministratorAccess</code> for learning purposes, but it's recommended to follow the principle of least privilege in production.</p>
<p> <strong>AWS CLI Installed and Configured</strong>: make sure you've <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html">installed the AWS CLI</a> and configured it with your IAM credentials using:</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<p> <strong>Terraform Installed</strong>: download and install Terraform from the <a target="_blank" href="https://www.terraform.io/downloads">official website</a>. Confirm installation by running:</p>
<pre><code class="lang-bash">terraform -v
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753776635272/7bf9afee-8d28-4007-abcc-4cb5c9c2249c.png" alt class="image--center mx-auto" /></p>
<p>Once all these are in place, you're ready to start working with Terraform state files!</p>
<hr />
<h2 id="heading-youtube-video-demo">💡 YouTube Video Demo</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/_pjpx6rsxn4">https://youtu.be/_pjpx6rsxn4</a></div>
<p> </p>
<hr />
<h2 id="heading-what-is-the-terraformtfstate-file">💡 What is the <code>terraform.tfstate</code> File?</h2>
<p>If you've worked with Terraform before and created some resources, you might have noticed that after your first <code>terraform apply</code>, a file named <code>terraform.tfstate</code> is automatically generated. This file is the <strong>heart of your Terraform project</strong>.</p>
<p>But what exactly does it do?</p>
<p>The <code>terraform.tfstate</code> file maintains a <strong>mapping between the infrastructure code</strong> written in your <code>.tf</code> files and the <strong>actual resources</strong> deployed in your cloud account. It stores the <strong>last known state</strong> of those resources, including important metadata like resource IDs, attributes, dependencies, and more.</p>
<p>Here's why this is so important:</p>
<ul>
<li><p>When you run <code>terraform plan</code>, Terraform reads this file to <strong>compare the desired state</strong> (what's in your code) with the <strong>actual state</strong> (what's currently running in your cloud).</p>
</li>
<li><p>It then shows you the <strong>differences</strong> and determines what actions (create, update, delete) are needed to bring the infrastructure in sync with your code.</p>
</li>
<li><p>When you run <code>terraform apply</code>, Terraform updates the actual infrastructure and then updates the <code>terraform.tfstate</code> file accordingly.</p>
</li>
</ul>
<p>In short, this file is what allows Terraform to <strong>track, manage, and orchestrate changes</strong> to your infrastructure consistently and reliably.</p>
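<p>To make this mapping concrete, here is an illustrative sketch of how a tool could read the state and list resource addresses, roughly what <code>terraform state list</code> prints. The JSON below is a heavily trimmed, made-up example; real state files carry far more metadata (serial, lineage, outputs, provider details, and so on):</p>
<pre><code class="lang-python">import json

# Illustrative only: a heavily trimmed terraform.tfstate-style document.
state_text = """
{
  "version": 4,
  "terraform_version": "1.5.0",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "instances": [{"attributes": {"id": "i-0abc123", "public_ip": "54.0.0.1"}}]
    }
  ]
}
"""

state = json.loads(state_text)

# One address per resource, the same shape `terraform state list` prints.
addresses = [f'{res["type"]}.{res["name"]}' for res in state["resources"]]
for addr in addresses:
    print(addr)  # aws_instance.web
</code></pre>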
<p>However, there's a catch, especially when it comes to <strong>local state</strong>.</p>
<hr />
<h2 id="heading-problems-with-local-state-file">💡 Problems with Local State File</h2>
<p>While the <code>terraform.tfstate</code> file plays a critical role in managing your infrastructure, <strong>storing it locally comes with serious drawbacks</strong>, especially in team environments or production setups.</p>
<h4 id="heading-1-sensitive-information-exposure">1. <strong>Sensitive Information Exposure</strong></h4>
<p>The state file often contains <strong>sensitive data</strong> like:</p>
<ul>
<li><p>API keys</p>
</li>
<li><p>Passwords</p>
</li>
<li><p>Resource metadata and configuration values</p>
</li>
</ul>
<p>If this file is stored locally and shared improperly (e.g., committed to Git), it can lead to <strong>security risks</strong> and potential breaches.</p>
<h4 id="heading-2-no-single-source-of-truth">2. <strong>No Single Source of Truth</strong></h4>
<p>Let's say two developers are working on the same Terraform project with local state files. If both of them:</p>
<ul>
<li><p>Make changes</p>
</li>
<li><p>Apply them independently</p>
</li>
<li><p>And have different versions of the <code>terraform.tfstate</code> file</p>
</li>
</ul>
<p>...this can lead to <strong>conflicting updates</strong>, resource mismatches, or worse, an endless loop of undoing each other's changes. The result? Chaos and an unreliable infrastructure state.</p>
<h4 id="heading-3-lack-of-collaboration-and-control">3. <strong>Lack of Collaboration and Control</strong></h4>
<p>Local state doesn't support features like:</p>
<ul>
<li><p><strong>State Locking</strong>: Prevents multiple users from making concurrent changes</p>
</li>
<li><p><strong>Versioning</strong>: Track history of infrastructure changes</p>
</li>
<li><p><strong>Secure Sharing</strong>: Centralized access for teams</p>
</li>
<li><p><strong>High Availability</strong>: No risk of losing state due to a lost local machine</p>
</li>
</ul>
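<p>To see why <strong>state locking</strong> matters, here is a toy, self-contained analogue: the lock is acquired by atomically creating a marker that can exist only once, much like Terraform's conditional write of a <code>LockID</code> item in DynamoDB. This is purely illustrative, not how Terraform itself is implemented:</p>
<pre><code class="lang-python">import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "tfstate-demo.lock")

# Clear any stale lock left over from a previous run of this demo.
if os.path.exists(lock_path):
    os.remove(lock_path)

def acquire_lock(who):
    """Take the lock; return True only if we created the marker first."""
    try:
        # O_CREAT | O_EXCL is atomic: exactly one caller can win,
        # much like a conditional PutItem on the DynamoDB LockID key.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, who.encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock():
    os.remove(lock_path)

print(acquire_lock("alice"))  # True: alice now holds the lock
print(acquire_lock("bob"))    # False: bob must wait for a release
release_lock()
</code></pre>
<p>With only local state there is no such gate, so two people running <code>terraform apply</code> at the same time can both "win" and corrupt the state.</p>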
<hr />
<h2 id="heading-why-remote-state-is-recommended">💡 Why Remote State is Recommended</h2>
<p>To address these issues, Terraform allows storing state remotely using <strong>backends</strong> like:</p>
<ul>
<li><p><strong>Amazon S3 (with DynamoDB for state locking)</strong></p>
</li>
<li><p><strong>Azure Blob Storage</strong></p>
</li>
<li><p><strong>Google Cloud Storage</strong></p>
</li>
<li><p><strong>HashiCorp Cloud Platform (HCP) Terraform</strong></p>
</li>
</ul>
<p>Remote backends provide:</p>
<ul>
<li><p><strong>Encrypted storage</strong></p>
</li>
<li><p><strong>Automatic versioning</strong></p>
</li>
<li><p><strong>Team-friendly collaboration</strong></p>
</li>
<li><p><strong>State locking</strong> to avoid simultaneous updates</p>
</li>
</ul>
<p>For most real-world projects, especially when working in a team or managing production infrastructure, configuring a <strong>remote backend is not just a best practice, it's a necessity</strong>.</p>
<p>Next, let's walk through how to do this using <strong>AWS S3 and DynamoDB</strong>.</p>
<hr />
<h2 id="heading-setting-up-remote-terraform-backend-with-aws-s3-and-dynamodb">💡 Setting Up Remote Terraform Backend with AWS S3 and DynamoDB</h2>
<p>Now that we understand the problems with local state, let's see how to properly configure <strong>remote state storage</strong> using <strong>AWS S3 (for storing the state file)</strong> and <strong>DynamoDB (for state locking)</strong>.</p>
<p>Instead of manually creating the required AWS resources, we'll automate the setup using a simple bash script.</p>
<h3 id="heading-step-1-automate-backend-resource-creation">🔧 <strong>Step 1: Automate Backend Resource Creation</strong></h3>
<p>Create a new file named <code>config.sh</code> and paste the following content into it:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -e

<span class="hljs-comment"># Environment Variables</span>
AWS_REGION=<span class="hljs-string">"us-east-1"</span>
S3_BUCKET_NAME=<span class="hljs-string">"pravesh-terraform-state-bucket-2025"</span>
DYNAMODB_TABLE_NAME=<span class="hljs-string">"terraform-state-lock"</span> 
STATE_KEY=<span class="hljs-string">"terraform/terraform.tfstate"</span> 

<span class="hljs-built_in">echo</span> <span class="hljs-string">"--- Creating AWS Resources for Terraform Backend ---"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-comment"># 1. Create S3 Bucket</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Creating S3 bucket: <span class="hljs-variable">$S3_BUCKET_NAME</span> in region <span class="hljs-variable">$AWS_REGION</span>..."</span>
aws s3api create-bucket \
    --bucket <span class="hljs-string">"<span class="hljs-variable">$S3_BUCKET_NAME</span>"</span> \
    --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Enabling versioning on S3 bucket..."</span>
aws s3api put-bucket-versioning \
    --bucket <span class="hljs-string">"<span class="hljs-variable">$S3_BUCKET_NAME</span>"</span> \
    --versioning-configuration Status=Enabled

<span class="hljs-built_in">echo</span> <span class="hljs-string">"S3 bucket created and versioning enabled."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-comment"># 2. Create DynamoDB Table for State Locking</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Creating DynamoDB table: <span class="hljs-variable">$DYNAMODB_TABLE_NAME</span>..."</span>
aws dynamodb create-table \
    --table-name <span class="hljs-string">"<span class="hljs-variable">$DYNAMODB_TABLE_NAME</span>"</span> \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"DynamoDB table created for state locking."</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>
</code></pre>
<p>Make the script executable and run it:</p>
<pre><code class="lang-bash">chmod u+x config.sh
./config.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753791465295/2f06e0cc-196c-4c6e-93ab-6a5ca61fde96.png" alt class="image--center mx-auto" /></p>
<p>This script will create:</p>
<ul>
<li><p>An <strong>S3 bucket</strong> with versioning enabled to store your <code>terraform.tfstate</code></p>
</li>
<li><p>A <strong>DynamoDB table</strong> for locking and preventing concurrent state operations</p>
</li>
</ul>
<h3 id="heading-step-2-create-a-terraform-project-to-provision-an-nginx-server">🚀 Step 2: Create a Terraform Project to Provision an NGINX Server</h3>
<p>Now let's set up a basic Terraform project that provisions an EC2 instance running an NGINX server with a portfolio page.</p>
<p>Create a new directory called <code>basic-terra</code> and inside it, add the following files:</p>
<h4 id="heading-providertfhttpprovidertf">📄 <code>provider.tf</code></h4>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>
}
</code></pre>
<h4 id="heading-maintfhttpmaintf">📄 <code>main.tf</code></h4>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"sg"</span> {
  name        = <span class="hljs-string">"Basic-Security Group"</span>
  description = <span class="hljs-string">"Allow port 80 for HTTP"</span>

  tags = {
    Name = <span class="hljs-string">"Basic-sg"</span>
  }
}

resource <span class="hljs-string">"aws_vpc_security_group_egress_rule"</span> <span class="hljs-string">"example"</span> {
  security_group_id = aws_security_group.sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  ip_protocol       = <span class="hljs-string">"-1"</span>
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"example"</span> {
  security_group_id = aws_security_group.sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  from_port         = 80
  to_port           = 80
  ip_protocol       = <span class="hljs-string">"tcp"</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web"</span> {
  ami             = <span class="hljs-string">"ami-020cba7c55df1f615"</span>  <span class="hljs-comment"># Use a valid AMI ID for your region</span>
  instance_type   = <span class="hljs-string">"t2.micro"</span>
  security_groups = [aws_security_group.sg.name]
  user_data       = file(<span class="hljs-string">"userdata.sh"</span>)

  tags = {
    Name = <span class="hljs-string">"basic-terra"</span>
  }
}

output <span class="hljs-string">"instance_public_ip"</span> {
  value       = aws_instance.web.public_ip
  description = <span class="hljs-string">"Website is running on this address:"</span>
}
</code></pre>
<h4 id="heading-userdatashhttpuserdatash">📄 <code>userdata.sh</code></h4>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-comment"># Install NGINX</span>
apt update -y
apt install nginx -y

<span class="hljs-comment"># Start NGINX service</span>
systemctl <span class="hljs-built_in">enable</span> nginx
systemctl start nginx

<span class="hljs-comment"># Portfolio HTML</span>
cat &lt;&lt;EOF &gt; /var/www/html/index.html
&lt;!DOCTYPE html&gt;
&lt;html lang=<span class="hljs-string">"en"</span>&gt;
&lt;head&gt;
  &lt;meta charset=<span class="hljs-string">"UTF-8"</span>&gt;
  &lt;title&gt;Pravesh Sudha | Portfolio&lt;/title&gt;
  &lt;style&gt;
    body {
      font-family: Arial, sans-serif;
      background-color: <span class="hljs-comment">#f4f4f4;</span>
      text-align: center;
      padding: 50px;
    }
    h1 { color: <span class="hljs-comment">#333; }</span>
    p  { font-size: 18px; color: <span class="hljs-comment">#666; }</span>
    a  { color: <span class="hljs-comment">#007BFF; text-decoration: none; }</span>
  &lt;/style&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;h1&gt;Hi, I<span class="hljs-string">'m Pravesh Sudha&lt;/h1&gt;
  &lt;p&gt;DevOps | Cloud | Content Creator&lt;/p&gt;
  &lt;p&gt;
    &lt;a href="https://blog.praveshsudha.com" target="_blank"&gt;Blog&lt;/a&gt; |
    &lt;a href="https://x.com/praveshstwt" target="_blank"&gt;Twitter&lt;/a&gt; |
    &lt;a href="https://www.youtube.com/@pravesh-sudha" target="_blank"&gt;YouTube&lt;/a&gt; |
    &lt;a href="https://www.linkedin.com/in/pravesh-sudha/" target="_blank"&gt;LinkedIn&lt;/a&gt;
  &lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
EOF</span>
</code></pre>
<h4 id="heading-backendtfhttpbackendtf">📄 <code>backend.tf</code></h4>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket         = <span class="hljs-string">"pravesh-terraform-state-bucket-2025"</span>
    key            = <span class="hljs-string">"terraform/terraform.tfstate"</span>
    region         = <span class="hljs-string">"us-east-1"</span>
    dynamodb_table = <span class="hljs-string">"terraform-state-lock"</span>
    encrypt        = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<h3 id="heading-step-3-initialize-plan-amp-apply">Step 3: Initialize, Plan &amp; Apply</h3>
<p>Now inside the <code>basic-terra</code> directory, run the following commands:</p>
<pre><code class="lang-bash">terraform init
terraform plan
terraform apply --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753792130792/0f9bcc11-7f9e-448f-add4-5f8541a67ab0.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753792224770/3f016d50-591c-4fc2-b78b-3ef7ce4a684a.png" alt class="image--center mx-auto" /></p>
<p>Once the resources are created, Terraform will output the <strong>public IP</strong> of the EC2 instance. Open it in your browser and you'll see your <strong>personal portfolio hosted via NGINX</strong>! 🎉</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753792251992/f27657bc-e9b3-4c7f-98d2-5e79df3ede3e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-check-your-state-file">📦 Check Your State File</h3>
<p>Now visit your S3 console and open the bucket <code>pravesh-terraform-state-bucket-2025</code>. You'll see your <code>terraform.tfstate</code> file stored securely with:</p>
<ul>
<li><p><strong>Encryption</strong> enabled</p>
</li>
<li><p><strong>Versioning</strong> in place</p>
</li>
<li><p><strong>State locking</strong> handled by DynamoDB</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753792585579/e9a6ed37-993f-432b-a51f-9a1188555f35.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753792457902/2d585445-9e18-4699-9f9e-0e00c95b24e2.png" alt class="image--center mx-auto" /></p>
<p>This setup not only makes your state secure but also production-grade and team-ready!</p>
<hr />
<h2 id="heading-cleaning-up-tearing-the-resources-down">🧹 Cleaning Up: Tearing the Resources Down</h2>
<p>Before we wrap up, let's <strong>clean up</strong> all the resources we created to avoid unnecessary AWS charges.</p>
<h3 id="heading-step-1-destroy-terraform-managed-resources">Step 1: Destroy Terraform-managed Resources</h3>
<p>Navigate to your project directory <code>basic-terra</code> and run the following command:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753793237728/ade06782-4b95-4759-bb1f-08bafe32606f.png" alt class="image--center mx-auto" /></p>
<p>This will:</p>
<ul>
<li><p>Terminate the EC2 instance</p>
</li>
<li><p>Delete the security group and its associated ingress/egress rules</p>
</li>
</ul>
<h3 id="heading-step-2-delete-the-s3-bucket-and-dynamodb-table">Step 2: Delete the S3 Bucket and DynamoDB Table</h3>
<p>The S3 bucket and DynamoDB table were created manually via the <code>config.sh</code> script, so we'll clean them up using another automation script.</p>
<p>Create a file called <code>delete.sh</code> and paste the following content:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

<span class="hljs-built_in">set</span> -e

<span class="hljs-comment"># Environment Variables</span>
AWS_REGION=<span class="hljs-string">"us-east-1"</span>
S3_BUCKET_NAME=<span class="hljs-string">"pravesh-terraform-state-bucket-2025"</span>
DYNAMODB_TABLE_NAME=<span class="hljs-string">"terraform-state-lock"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"--- Deleting AWS Resources for Terraform Backend ---"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">""</span>

<span class="hljs-comment"># Empty the S3 Bucket (including versions and delete markers)</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Emptying S3 bucket: <span class="hljs-variable">$S3_BUCKET_NAME</span>..."</span>

objects_to_delete=$(aws s3api list-object-versions \
    --bucket <span class="hljs-string">"<span class="hljs-variable">$S3_BUCKET_NAME</span>"</span> \
    --output=json \
    --query=<span class="hljs-string">'{Objects: Versions[].[Key,VersionId],DeleteMarkers:DeleteMarkers[].[Key,VersionId]}'</span> \
    --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span>)

<span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-subst">$(echo <span class="hljs-string">"<span class="hljs-variable">$objects_to_delete</span>"</span> | jq '.Objects | length')</span>"</span> -gt 0 ] || \
   [ <span class="hljs-string">"<span class="hljs-subst">$(echo <span class="hljs-string">"<span class="hljs-variable">$objects_to_delete</span>"</span> | jq '.DeleteMarkers | length')</span>"</span> -gt 0 ]; <span class="hljs-keyword">then</span>

    delete_payload=$(<span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$objects_to_delete</span>"</span> | jq -c <span class="hljs-string">'{Objects: (.Objects + .DeleteMarkers | map({Key: .[0], VersionId: .[1]}) | unique)}'</span>)

    aws s3api delete-objects \
        --bucket <span class="hljs-string">"<span class="hljs-variable">$S3_BUCKET_NAME</span>"</span> \
        --delete <span class="hljs-string">"<span class="hljs-variable">$delete_payload</span>"</span> \
        --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span>

    <span class="hljs-built_in">echo</span> <span class="hljs-string">"S3 bucket emptied."</span>
<span class="hljs-keyword">else</span>
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"S3 bucket is already empty."</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-comment"># Delete the S3 Bucket</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Deleting S3 bucket..."</span>
aws s3 rb s3://<span class="hljs-string">"<span class="hljs-variable">$S3_BUCKET_NAME</span>"</span> --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span> --force
<span class="hljs-built_in">echo</span> <span class="hljs-string">"S3 bucket deleted."</span>

<span class="hljs-comment"># Delete the DynamoDB Table</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Deleting DynamoDB table..."</span>
aws dynamodb delete-table \
    --table-name <span class="hljs-string">"<span class="hljs-variable">$DYNAMODB_TABLE_NAME</span>"</span> \
    --region <span class="hljs-string">"<span class="hljs-variable">$AWS_REGION</span>"</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"DynamoDB table deleted."</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">" Terraform backend resources deleted successfully."</span>
</code></pre>
<p>Make the script executable and run it:</p>
<pre><code class="lang-bash">chmod u+x delete.sh
./delete.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753793245444/c33d13a8-69bd-4583-ae48-58c11dd3a70c.png" alt class="image--center mx-auto" /></p>
<p>This script will:</p>
<ul>
<li><p><strong>Empty the S3 bucket</strong> including all versions and delete markers</p>
</li>
<li><p><strong>Delete the bucket itself</strong></p>
</li>
<li><p><strong>Remove the DynamoDB table</strong></p>
</li>
</ul>
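<p>If you are curious what the <code>jq</code> pipeline in <code>delete.sh</code> is computing, here is a simplified, offline Python equivalent (the <code>unique</code> deduplication step is omitted): it merges object versions and delete markers into the single payload that <code>aws s3api delete-objects</code> expects:</p>
<pre><code class="lang-python"># Sample shaped like the query output in delete.sh:
# {Objects: Versions[].[Key,VersionId], DeleteMarkers: DeleteMarkers[].[Key,VersionId]}
sample = {
    "Objects": [
        ["terraform/terraform.tfstate", "v1"],
        ["terraform/terraform.tfstate", "v2"],
    ],
    "DeleteMarkers": [["terraform/terraform.tfstate", "v3"]],
}

def build_delete_payload(listing):
    """Merge versions and delete markers into one delete-objects payload."""
    pairs = (listing.get("Objects") or []) + (listing.get("DeleteMarkers") or [])
    return {"Objects": [{"Key": key, "VersionId": version} for key, version in pairs]}

payload = build_delete_payload(sample)
print(len(payload["Objects"]))  # 3: two object versions plus one delete marker
</code></pre>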
<hr />
<h2 id="heading-conclusion">🏁 Conclusion</h2>
<p>Managing Terraform state properly is not just a best practice; it's essential for building reliable, secure, and scalable infrastructure. In this blog, we explored the importance of the <code>terraform.tfstate</code> file, the risks of keeping it local, and how to overcome those risks by configuring <strong>remote state storage using AWS S3 and DynamoDB</strong>.</p>
<p>We also took a hands-on approach to:</p>
<ul>
<li><p>Automate backend resource creation with a bash script</p>
</li>
<li><p>Deploy an EC2 instance running NGINX to host a simple portfolio</p>
</li>
<li><p>Clean up all resources to avoid charges</p>
</li>
</ul>
<p>Whether you're a beginner exploring Infrastructure as Code or someone working on real-world cloud projects, using remote state and state locking will take your Terraform workflows to the next level. 🌍</p>
<p>If you found this helpful, feel free to connect with me and follow my work:</p>
<ul>
<li><p>🔗 <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha/">LinkedIn</a></p>
</li>
<li><p>🐦 <a target="_blank" href="https://x.com/praveshstwt">Twitter / X</a></p>
</li>
<li><p>📺 <a target="_blank" href="https://www.youtube.com/@pravesh-sudha">YouTube</a></p>
</li>
<li><p> <a target="_blank" href="https://blog.praveshsudha.com/">Blog</a></p>
</li>
</ul>
<p>Thanks for reading, and happy Terraforming! 💻🚀</p>
]]></description><link>https://blog.praveshsudha.com/where-should-you-store-terraform-state-files-for-maximum-efficiency</link><guid isPermaLink="true">https://blog.praveshsudha.com/where-should-you-store-terraform-state-files-for-maximum-efficiency</guid><category><![CDATA[Devops]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item><item><title><![CDATA[How to Deploy a Tetris Game on AWS ECS with Terraform]]></title><description><![CDATA[<h2 id="heading-introduction">💡 Introduction</h2>
<p>Welcome to the world of <strong>cloud computing</strong> and <strong>automation</strong>!<br />In this blog, we'll dive into one of AWS's core container orchestration services: <strong>Amazon Elastic Container Service (ECS)</strong>.</p>
<p>We'll start by understanding what ECS is and how it works. Then, to bring theory into practice, we'll deploy a <strong>Tetris game</strong> that has been containerized using Docker onto an ECS cluster, all <strong>provisioned and automated with Terraform</strong>.</p>
<p>By the end of this walkthrough, you'll gain practical experience in:<br />Writing Terraform configurations to provision AWS infrastructure<br />Deploying containerized applications on ECS<br />Understanding the overall workflow of infrastructure as code on AWS</p>
<p>So, without further ado, <strong>let's get started!</strong></p>
<hr />
<h2 id="heading-pre-requisites">🛠 Pre-requisites</h2>
<p>Before we dive into building and deploying, let's make sure your environment is set up correctly.</p>
<p>To follow along, you'll need:</p>
<p>An <strong>AWS account</strong> with an IAM user configured<br />The IAM user must have sufficient permissions to create and manage ECS, IAM roles, and other resources<br /><strong>AWS CLI</strong> installed and configured on your system with your Access Key and Secret Access Key</p>
<p>If you're new to configuring the AWS CLI or IAM users, don't worry; I've explained this step-by-step in another project guide on my blog:<br />👉 <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a></p>
<p>Having these prerequisites ready will help you focus directly on writing Terraform code and deploying the application smoothly.</p>
<hr />
<h2 id="heading-youtube-demo">💡 YouTube Demo</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/tVuqBZfU04M">https://youtu.be/tVuqBZfU04M</a></div>
<p> </p>
<h2 id="heading-what-is-ecs">🧩 What is ECS?</h2>
<p><strong>AWS Elastic Container Service (ECS)</strong> is a fully managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications.<br />With ECS, you can easily start, stop, and manage Docker containers running on a cluster of EC2 instances or on a <strong>serverless architecture</strong> using AWS Fargate  all <strong>without worrying about managing the underlying infrastructure</strong>.</p>
<h3 id="heading-how-things-work-with-aws-ecs">How things work with AWS ECS</h3>
<p>Let's break down the basic workflow of deploying a containerized application on ECS:</p>
<ol>
<li><p><strong>Create an ECS Cluster</strong></p>
<ul>
<li><p>This acts as the logical grouping of your infrastructure.</p>
</li>
<li><p>You can choose the infrastructure type (EC2 instances or serverless Fargate) and configure logging, security, and tags.</p>
</li>
</ul>
</li>
<li><p><strong>Define a Task Definition Family</strong></p>
<ul>
<li><p>Think of this as the <strong>blueprint</strong> for your container.</p>
</li>
<li><p>Here you specify:</p>
<ul>
<li><p>The processor architecture (e.g., AMD64 or ARM64)</p>
</li>
<li><p>The Docker image (public image from Docker Hub, an Amazon ECR repository, etc.)</p>
</li>
<li><p>Container details such as port mappings, container name, and other runtime configurations.</p>
</li>
</ul>
</li>
<li><p>You can also configure <strong>multi-AZ deployments</strong> for high availability.</p>
</li>
</ul>
</li>
<li><p><strong>Create a Service</strong></p>
<ul>
<li><p>Inside your ECS cluster, you create a service that uses your task definition.</p>
</li>
<li><p>The service ensures the desired number of containers are running and handles deployment strategies and scaling.</p>
</li>
</ul>
</li>
</ol>
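<p>In Terraform terms, those three building blocks map to roughly three resources. Below is a condensed, illustrative sketch assuming Fargate; it is not necessarily the exact code from the repo used later in this post, and it omits the task execution role and logging for brevity:</p>
<pre><code class="lang-bash"># Illustrative sketch: cluster, task definition (blueprint), and service.
resource "aws_ecs_cluster" "tetris" {
  name = "tetris-cluster"
}

resource "aws_ecs_task_definition" "tetris" {
  family                   = "tetris"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  container_definitions = jsonencode([{
    name         = "tetris"
    image        = "uzyexe/tetris:latest"
    portMappings = [{ containerPort = 80, hostPort = 80 }]
  }])
}

resource "aws_ecs_service" "tetris" {
  name            = "tetris-service"
  cluster         = aws_ecs_cluster.tetris.id
  task_definition = aws_ecs_task_definition.tetris.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = data.aws_subnets.default.ids
    security_groups  = [aws_security_group.tetris_ecs_sg.id]
    assign_public_ip = true
  }
}
</code></pre>
<p>The service keeps <code>desired_count</code> copies of the task running and replaces any container that stops, which is the reconciliation behavior described above.</p>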
<p>With this setup, it's completely possible to deploy our Dockerized Tetris game using the <strong>AWS Console</strong> (also known as "ClickOps").<br />But wait: <strong>this isn't how things are done in production</strong>.</p>
<p>In production, we use <strong>Infrastructure as Code (IaC)</strong> to provision and manage resources consistently and repeatably.<br />In our case, we'll use <strong>Terraform</strong> to automate the entire process and deploy the Tetris game with just a single command.</p>
<p>Sounds exciting? Let's continue!</p>
<hr />
<h2 id="heading-local-testing">🧪 Local Testing</h2>
<p>Before we deploy the application to the cloud, it's a good practice to <strong>test it locally</strong> to make sure everything runs as expected.</p>
<p>Assuming you have <strong>Docker</strong> installed on your system, you can start the Tetris game container with the following command:</p>
<pre><code class="lang-bash">docker run -d -p 80:80 uzyexe/tetris:latest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753170522419/b4846b2a-5512-48f0-aa66-93b54e906408.png" alt class="image--center mx-auto" /></p>
<p>Once the container is running, open your browser and navigate to:<br /><a target="_blank" href="http://localhost/"><strong>http://localhost:80</strong></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753170539899/ad9f1b70-4cf9-46d0-8636-fc643b342dcb.png" alt class="image--center mx-auto" /></p>
<p>You should see the <strong>Tetris application</strong> live on your machine!<br />Press <strong>space</strong> to play  and enjoy your very own Dockerized Tetris game.</p>
<p>Testing locally helps ensure that your container image is working correctly before moving on to deployment on AWS ECS.</p>
<hr />
<h2 id="heading-deploying-to-aws-cloud">Deploying to AWS Cloud</h2>
<p>Now that we've tested our Dockerized Tetris game locally, let's move on to the exciting part: deploying it to <strong>AWS ECS</strong> using <strong>Terraform</strong>.</p>
<p>The complete Terraform code for this project is available on my GitHub repository:<br />👉 <a target="_blank" href="https://github.com/Pravesh-Sudha/tetris-ecs-deploy">https://github.com/Pravesh-Sudha/tetris-ecs-deploy</a></p>
<h3 id="heading-step-1-clone-the-repository">📦 Step 1: Clone the repository</h3>
<p>Start by cloning the repository and moving into the Terraform configuration directory:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/Pravesh-Sudha/tetris-ecs-deploy
<span class="hljs-built_in">cd</span> tetris-ecs-deploy/terra-config
</code></pre>
<h3 id="heading-step-2-configure-terraform-backend">🗄 Step 2: Configure Terraform backend</h3>
<p>We're following Terraform best practices by storing the <strong>Terraform state file</strong> remotely in an S3 bucket.<br />Inside the <code>backend.tf</code> file, you'll see this configuration:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket = <span class="hljs-string">"pravesh-tetris-backend"</span>
    key    = <span class="hljs-string">"ecs-state-file/terraform.tfstate"</span>
    region = <span class="hljs-string">"us-east-1"</span>
  }
}
</code></pre>
<blockquote>
<p>This tells Terraform to store the state file in the <code>pravesh-tetris-backend</code> S3 bucket, keeping it secure and shareable across your team.</p>
</blockquote>
<p>If the bucket doesn't exist yet, create it using the AWS CLI:</p>
<pre><code class="lang-bash">aws s3 mb s3://pravesh-tetris-backend
</code></pre>
<blockquote>
<p>Tip: You can change the bucket name in <code>backend.tf</code> if you'd prefer to use a different name.</p>
</blockquote>
<h3 id="heading-step-3-initialize-terraform">Step 3: Initialize Terraform</h3>
<p>Initialize your Terraform project to install the required providers and configure the backend:</p>
<pre><code class="lang-bash">terraform init
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171306913/d6a6d15b-fcf3-438f-8ded-3bcab5cf2fa5.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-4-preview-infrastructure-changes">🔍 Step 4: Preview infrastructure changes</h3>
<p>Before applying changes, it's always a good idea to see what Terraform is about to do:</p>
<pre><code class="lang-bash">terraform plan
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171322534/88fa68df-8a9a-4dda-8eef-1e54e02a56a6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-apply-and-create-resources">🚀 Step 5: Apply and create resources</h3>
<p>Finally, deploy everything to AWS:</p>
<pre><code class="lang-bash">terraform apply --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171340801/7b3f8cfb-ba0a-46d4-838f-2325e67c551b.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Within a few minutes, your ECS cluster, task definition, and service will be created, and the Tetris game will go live.</p>
</blockquote>
<h2 id="heading-understanding-the-terraform-code">📝 Understanding the Terraform code</h2>
<p>While the resources are being provisioned, let's briefly walk through what each file does:</p>
<h3 id="heading-valuestf">🌐 <code>values.tf</code></h3>
<p>This file fetches the default VPC and its subnets, then creates a security group that allows inbound HTTP (port 80) traffic from anywhere and allows all outbound traffic.</p>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"default"</span> {
  default = <span class="hljs-literal">true</span>
}

data <span class="hljs-string">"aws_subnets"</span> <span class="hljs-string">"default"</span> {
  filter {
    name   = <span class="hljs-string">"vpc-id"</span>
    values = [data.aws_vpc.default.id]
  }
}

resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"tetris_ecs_sg"</span> {
  name        = <span class="hljs-string">"allow-http"</span>
  description = <span class="hljs-string">"Allow HTTP inbound traffic and all outbound traffic"</span>
  vpc_id      = data.aws_vpc.default.id
  tags = { Name = <span class="hljs-string">"tetris_ecs_sg"</span> }
}

resource <span class="hljs-string">"aws_vpc_security_group_ingress_rule"</span> <span class="hljs-string">"tetris_ecs_sg_ipv4"</span> {
  security_group_id = aws_security_group.tetris_ecs_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  from_port         = 80
  ip_protocol       = <span class="hljs-string">"tcp"</span>
  to_port           = 80
}

resource <span class="hljs-string">"aws_vpc_security_group_egress_rule"</span> <span class="hljs-string">"allow_all_traffic"</span> {
  security_group_id = aws_security_group.tetris_ecs_sg.id
  cidr_ipv4         = <span class="hljs-string">"0.0.0.0/0"</span>
  ip_protocol       = <span class="hljs-string">"-1"</span>
}
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p>Fetch the default VPC and its subnets</p>
</li>
<li><p>Create a security group to allow HTTP traffic so our Tetris game is reachable</p>
</li>
<li><p>Allow all outbound traffic for the containers to access the internet if needed</p>
</li>
</ul>
<h3 id="heading-backendtf">🗄 <code>backend.tf</code></h3>
<p>This file configures Terraform to store its state file in an S3 bucket:</p>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket = <span class="hljs-string">"pravesh-tetris-backend"</span>
    key    = <span class="hljs-string">"ecs-state-file/terraform.tfstate"</span>
    region = <span class="hljs-string">"us-east-1"</span>
  }
}
</code></pre>
<p>This ensures your state file is safely stored remotely, enabling collaboration and avoiding local state conflicts.</p>
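<p>One optional hardening step, not part of this project's config: the S3 backend also supports <strong>state locking</strong> through a DynamoDB table, which prevents two people from running <code>terraform apply</code> against the same state at once. A sketch (the table name below is just an example; the table needs a string partition key called <code>LockID</code>):</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket         = "pravesh-tetris-backend"
    key            = "ecs-state-file/terraform.tfstate"
    region         = "us-east-1"
    # Optional: DynamoDB table with a "LockID" string hash key for state locking
    dynamodb_table = "terraform-state-lock"
  }
}
</code></pre>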
<h3 id="heading-maintf">🛠 <code>main.tf</code></h3>
<p>This is where the core infrastructure is defined:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># ECS Cluster</span>
resource <span class="hljs-string">"aws_ecs_cluster"</span> <span class="hljs-string">"tetris_cluster"</span> {
  name = <span class="hljs-string">"tetris-cluster"</span>
  tags = { Name = <span class="hljs-string">"tetris-cluster"</span> }
}

<span class="hljs-comment"># IAM Role for ECS task execution</span>
resource <span class="hljs-string">"aws_iam_role"</span> <span class="hljs-string">"ecs_task_execution_role"</span> {
  name = <span class="hljs-string">"tetrisEcsTaskExecutionRole"</span>
  assume_role_policy = jsonencode({
    Version = <span class="hljs-string">"2012-10-17"</span>
    Statement = [{
      Action = <span class="hljs-string">"sts:AssumeRole"</span>
      Effect = <span class="hljs-string">"Allow"</span>
      Principal = { Service = <span class="hljs-string">"ecs-tasks.amazonaws.com"</span> }
    }]
  })
}

<span class="hljs-comment"># Attach ECS task execution policy</span>
resource <span class="hljs-string">"aws_iam_role_policy_attachment"</span> <span class="hljs-string">"ecs_task_execution_policy"</span> {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = <span class="hljs-string">"arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"</span>
}

<span class="hljs-comment"># ECS Task Definition</span>
resource <span class="hljs-string">"aws_ecs_task_definition"</span> <span class="hljs-string">"tetris_task"</span> {
  family                   = <span class="hljs-string">"tetris-task"</span>
  network_mode             = <span class="hljs-string">"awsvpc"</span>
  requires_compatibilities = [<span class="hljs-string">"FARGATE"</span>]
  cpu                      = <span class="hljs-string">"256"</span>
  memory                   = <span class="hljs-string">"512"</span>
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  container_definitions = jsonencode([
    {
      name      = <span class="hljs-string">"tetris"</span>
      image     = <span class="hljs-string">"uzyexe/tetris:latest"</span>
      essential = <span class="hljs-literal">true</span>
      portMappings = [{
        containerPort = 80
        hostPort      = 80
        protocol      = <span class="hljs-string">"tcp"</span>
      }]
    }
  ])
  tags = { Name = <span class="hljs-string">"tetris-task"</span> }
}

<span class="hljs-comment"># ECS Service</span>
resource <span class="hljs-string">"aws_ecs_service"</span> <span class="hljs-string">"tetris_service"</span> {
  name            = <span class="hljs-string">"tetris-service"</span>
  cluster         = aws_ecs_cluster.tetris_cluster.id
  task_definition = aws_ecs_task_definition.tetris_task.arn
  desired_count   = 1
  launch_type     = <span class="hljs-string">"FARGATE"</span>
  network_configuration {
    subnets          = data.aws_subnets.default.ids
    security_groups  = [aws_security_group.tetris_ecs_sg.id]
    assign_public_ip = <span class="hljs-literal">true</span>
  }
  tags = { Name = <span class="hljs-string">"tetris-service"</span> }
}
</code></pre>
<p><strong>Explanation:</strong></p>
<ul>
<li><p>Create an <strong>ECS cluster</strong> to manage the containers</p>
</li>
<li><p>Define an <strong>IAM role</strong> and attach a policy so ECS can pull container images and write logs</p>
</li>
<li><p>Create an <strong>ECS task definition</strong> that points to the public Tetris Docker image and maps port 80</p>
</li>
<li><p>Launch an <strong>ECS service</strong> on AWS Fargate to keep the container running, connected to our security group and VPC subnets</p>
</li>
</ul>
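<p>If you'd like Terraform itself to print a few useful identifiers after <code>terraform apply</code>, you could add an <code>outputs.tf</code> along these lines. This is a hypothetical addition, not a file in the repo, but the resource references match the <code>main.tf</code> above:</p>
<pre><code class="lang-bash"># Handy identifiers printed after `terraform apply`
output "cluster_name" {
  value = aws_ecs_cluster.tetris_cluster.name
}

output "service_name" {
  value = aws_ecs_service.tetris_service.name
}

output "security_group_id" {
  value = aws_security_group.tetris_ecs_sg.id
}
</code></pre>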
<h3 id="heading-done">✅ Done!</h3>
<p>After the apply completes, your Tetris game should be running on ECS!</p>
<hr />
<h2 id="heading-access-the-tetris-game">🎮 Access the Tetris Game</h2>
<p>Once your ECS service is running, it's time to see the Tetris game live in action!<br />But how do we get the <strong>public IP</strong> of the running ECS task?</p>
<p>While it's technically possible to fetch this IP directly through Terraform and expose it as an output, that gets tricky because Fargate tasks and their network interfaces are created dynamically, after the service itself exists.</p>
<p>To keep things simple, I've added a <strong>bash script</strong> that uses the <strong>AWS CLI</strong> to retrieve the public IP programmatically.</p>
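<p>If you're curious how such a script can work, here is a minimal sketch of the usual AWS CLI approach: resolve the running task, find its elastic network interface (ENI), then look up that ENI's public IP. The actual <code>get_ip.sh</code> in the repo may differ; the cluster and service names are assumed from <code>main.tf</code>, and the commands need valid AWS credentials:</p>
<pre><code class="lang-bash">#!/bin/bash
CLUSTER="tetris-cluster"
SERVICE="tetris-service"

# 1. Get the ARN of the first running task in the service
TASK_ARN=$(aws ecs list-tasks --cluster "$CLUSTER" --service-name "$SERVICE" \
  --query 'taskArns[0]' --output text)

# 2. Find the ENI attached to that task
ENI_ID=$(aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$TASK_ARN" \
  --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" \
  --output text)

# 3. Resolve the ENI to its public IP
PUBLIC_IP=$(aws ec2 describe-network-interfaces --network-interface-ids "$ENI_ID" \
  --query 'NetworkInterfaces[0].Association.PublicIp' --output text)

echo "http://$PUBLIC_IP"
</code></pre>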
<h3 id="heading-step-1-make-the-script-executable">🧰 Step 1: Make the script executable</h3>
<p>Navigate to the parent directory of the project (<code>tetris-ecs-deploy</code>) and run:</p>
<pre><code class="lang-bash">chmod u+x get_ip.sh
</code></pre>
<h3 id="heading-step-2-run-the-script">🚀 Step 2: Run the script</h3>
<p>Now execute the script to get the public IP of your ECS task:</p>
<pre><code class="lang-bash">./get_ip.sh
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171885086/947399e8-b6b4-46f3-a99e-038fc5b33106.png" alt class="image--center mx-auto" /></p>
<p>The script will output a link like:</p>
<pre><code class="lang-plaintext">http://&lt;PUBLIC-IP&gt;
</code></pre>
<p>Click the link (or open it in your browser) and you'll see your <strong>Tetris game running live on AWS ECS!</strong><br />Press <strong>space</strong> to start playing and enjoy your cloud-hosted Tetris.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171874142/dd180375-c5ac-4a8a-9a33-e411d8ac03f4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-clean-up-resources">🧹 Step 3: Clean up resources</h3>
<p>When you're done exploring the project, it's important to <strong>destroy the infrastructure</strong> to avoid unwanted AWS costs.</p>
<p>From the <code>terra-config</code> directory, run:</p>
<pre><code class="lang-bash">terraform destroy --auto-approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753171862788/e61a9f7a-fbc4-4ab1-8e2c-7d0c8bf24ba1.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>Tip:</strong> Don't forget to manually delete the S3 bucket you created for the Terraform state file. Terraform doesn't manage the backend bucket itself, so <code>terraform destroy</code> won't remove it.</p>
</blockquote>
<p>That's it! You've now learned how to <strong>deploy a Dockerized Tetris game on AWS ECS using Terraform</strong> and access it from anywhere.</p>
<hr />
<h2 id="heading-conclusion">🎯 Conclusion</h2>
<p>And that's a wrap! 🎉</p>
<p>In this blog, we explored how to:</p>
<ul>
<li><p>Understand <strong>AWS ECS</strong> and how it works under the hood</p>
</li>
<li><p>Test a Dockerized Tetris game locally</p>
</li>
<li><p>Use <strong>Terraform</strong> to provision AWS infrastructure automatically</p>
</li>
<li><p>Deploy the game on ECS with just a few commands</p>
</li>
<li><p>Access it from anywhere, and clean up afterwards to manage costs</p>
</li>
</ul>
<p>Through this hands-on project, you've seen the power of combining <strong>Infrastructure as Code</strong> with container orchestration in the cloud: skills that are highly valuable in modern DevOps and cloud-native development.</p>
<h3 id="heading-stay-connected">🙌 Stay connected</h3>
<p>If you enjoyed this blog or want to see more real-world DevOps, AWS, and Terraform projects:</p>
<ul>
<li><p>📌 <strong>Blog:</strong> <a target="_blank" href="https://blog.praveshsudha.com/">blog.praveshsudha.com</a></p>
</li>
<li><p>🐦 <strong>Twitter/X:</strong> <a target="_blank" href="https://twitter.com/praveshsudha">@praveshsudha</a></p>
</li>
<li><p>💼 <strong>LinkedIn:</strong> <a target="_blank" href="https://www.linkedin.com/in/pravesh-sudha">Pravesh Sudha</a></p>
</li>
<li><p>📺 <strong>YouTube:</strong> <a target="_blank" href="https://youtube.com/@praveshsudha">Pravesh Sudha</a></p>
</li>
</ul>
<p>Feel free to follow, connect, or drop me a message. I love sharing and discussing cloud projects, open-source contributions, and DevOps topics.</p>
<p>Thanks for reading, and happy building on the cloud! 🚀</p>
]]></description><link>https://blog.praveshsudha.com/how-to-deploy-a-tetris-game-on-aws-ecs-with-terraform</link><guid isPermaLink="true">https://blog.praveshsudha.com/how-to-deploy-a-tetris-game-on-aws-ecs-with-terraform</guid><category><![CDATA[Docker]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ECS]]></category><dc:creator><![CDATA[Pravesh Sudha]]></dc:creator></item></channel></rss>