Pravesh Sudha

Kubernetes for Beginners: Deploying an Nginx–Node–Redis Application


Hola Amigos! 👋

Today, we are embarking on a brand new series: K8s with Pravesh 🚀 — where we’ll break down Kubernetes, understand what it really is, and more importantly, how you can actually use it in a practical, no-BS way.

In today’s blog, we’ll dive into the fundamentals — Deployments, Services, and ConfigMaps — and use them to deploy a three-tier application on Minikube.

Now you might be thinking… “What’s new here? There are already thousands of blogs doing the same thing.”

And honestly, you’re not wrong.

But hold your horses for a second 🐎

This isn’t just another “apply this YAML and it works” kind of tutorial. We’re going to:

  • Understand what’s really happening under the hood

  • Debug real issues (yes, the ones that actually happen)

  • And build intuition so you don’t just run Kubernetes… you get it

So let’s dive in. 🔥


🛠️ Pre-Requisites

Before we dive deep, there are a couple of things you need to have set up. Nothing fancy — just the essentials to get your Kubernetes playground up and running.

🔹 Docker / Docker Desktop

We’ll be running Minikube using Docker, so make sure you have Docker installed on your system.

👉 Install it from here: https://docs.docker.com/get-started/get-docker/

🔹 Minikube

Think of Minikube as your personal Kubernetes cluster — lightweight, local, and perfect for experimenting and learning all the cool stuff without needing a cloud setup.

👉 Download it from here: https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Farm64%2Fstable%2Fbinary+download


🤔 What is Kubernetes (K8s)?

At its core, Kubernetes is a container orchestration tool.

Now that sounds fancy, but let’s simplify it a bit.

Think of Kubernetes as a Head Chef in a restaurant 👨‍🍳. It makes sure:

  • Everyone is doing their job properly

  • Work is flowing smoothly

  • And if something breaks… it steps in and fixes it

That’s the layman’s definition.

The Real Meaning

In technical terms, Kubernetes is responsible for:

  • Managing containers

  • Scaling them

  • Ensuring they are always running

  • Handling communication between them

You can think of it as an advanced version of Docker Compose — but built for production-grade systems.

The Smallest Unit: Pod

In Kubernetes, the smallest deployable unit is a Pod.

👉 A Pod is basically a wrapper around your container(s)

  • It can run one or more containers

  • These containers share:

    • Network

    • Storage

But here’s the thing…

Managing Pods manually? 😵‍💫 Not a great idea.
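
For intuition, here’s what a bare Pod manifest looks like — a minimal sketch (names are illustrative) that you could apply with `kubectl apply -f`, though in practice you’ll almost always let a Deployment create Pods for you:

```yaml
# A minimal standalone Pod -- fine for experiments,
# but it won't be rescheduled if it dies.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```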

Enter Deployments

To solve that, we have Deployments.

A Deployment is like a blueprint for your Pods.

You define:

  • Container image

  • Number of replicas

  • Ports

  • Volumes

  • Other configurations

And Kubernetes takes care of:

  • Creating Pods

  • Scaling them

  • Replacing them if they crash

💥 Much easier to manage.
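
As a quick taste (assuming you already have a running cluster), you can even create a Deployment imperatively, without writing any YAML — we’ll do it the declarative way later in the demo:

```shell
# Create a Deployment named "hello" with 3 nginx replicas
kubectl create deployment hello --image=nginx --replicas=3

# Watch Kubernetes spin up the Pods
kubectl get pods

# Clean up when done
kubectl delete deployment hello
```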

How Do Pods Talk to Each Other?

Back to our restaurant analogy 🍽️

The waiter needs to communicate with the chef, right?

But in Kubernetes… 👉 Pods are ephemeral — when a Pod is replaced, the new one gets a new IP address

So you can’t rely on talking to a Pod directly. We need something stable in between.

Services: The Communication Bridge

Services act as a bridge between Pods.

They provide:

  • Stable networking

  • Internal DNS

  • Load balancing

There are 3 main types:

🔹 ClusterIP

  • Default type

  • Used for internal communication only

  • Not accessible from outside the cluster

🔹 NodePort

  • Exposes the service on a static port (30000–32767 by default) on every node

  • Accessible from outside using:

    <Node-IP>:<NodePort>
    

🔹 LoadBalancer

  • Exposes the app to the outside world

  • Commonly used in cloud environments (AWS, GCP, etc.)
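
To make the difference concrete, here’s a sketch of a NodePort Service (the names and ports are illustrative). If you omit `nodePort`, Kubernetes picks a free one from the range for you:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo          # routes to Pods carrying this label
  ports:
    - port: 80         # Service port inside the cluster
      targetPort: 8080 # container port on the Pods
      nodePort: 30080  # exposed on every node's IP (30000-32767)
```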

ConfigMaps: Handling Custom Configurations

Back to the restaurant…

Imagine a customer walks in and says:

“I want a Caffè macchiato, with a little bit of soy, enough to make me go OH BOY!” — Kevin Hart fans, you know 😄

Handling custom requests manually can get messy…

But in Kubernetes, we have ConfigMaps for this.

👉 ConfigMaps allow you to:

  • Store non-confidential data

  • Use it inside your applications

  • Keep configs separate from your code

For sensitive data? 👉 Use Secrets
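
Besides mounting a ConfigMap as a file (which is what the Nginx setup in this demo does), you can also inject it as environment variables. A minimal, illustrative sketch — the names here are made up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  APP_MODE: "production"
  CACHE_TTL: "300"
---
apiVersion: v1
kind: Pod
metadata:
  name: settings-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:                 # every key becomes an env var
    - configMapRef:
        name: app-settings
```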

YAML: The Language of Kubernetes

All resources in Kubernetes are defined using YAML files.

You describe:

  • What you want

  • And Kubernetes makes it happen

If you want to explore more, check out the official docs: 👉 https://kubernetes.io/docs/setup/


⚙️ Practical Demonstration

Enough with the theory — now let’s get our hands dirty 🔥

So far, we’ve covered:

  • Deployments

  • Services

  • ConfigMaps

And to bring all of this together, we’ll deploy a three-tier application (Nginx–Node–Redis).

I’ve actually used this same app in one of my earlier projects to demonstrate CI/CD workflows with GitHub Actions and Terraform. If you’re curious, check it out here: 👉 https://blog.praveshsudha.com/cicd-for-terraform-with-github-actions-deploying-a-nodejs-redis-app-on-aws

Step 1: Clone the Project

git clone https://github.com/Pravesh-Sudha/nginx-node-redis.git

Open the project in your favorite editor (VS Code works great).

Understanding the App

This is a simple Node.js application that:

  • Displays a request counter

  • Increments the count on every refresh

  • Stores data in Redis

  • Uses Nginx as a reverse proxy (serving on port 80 instead of 5000)
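
The Redis hostname typically reaches the Node app as configuration rather than a hardcoded IP. As a hedged sketch (the exact variable name depends on the app’s source code — `REDIS_HOST` here is an assumption), the Node Deployment’s container spec can pass it like this:

```yaml
# Illustrative container-spec fragment -- check the repo's
# manifests for the real env var names.
containers:
- name: node
  image: node-app:latest
  ports:
  - containerPort: 5000
  env:
  - name: REDIS_HOST
    value: redis-service   # the Service name doubles as a DNS name
```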

Step 2: Run with Docker Compose

Before jumping into Kubernetes, let’s run it locally:

docker-compose up --build

Make sure Docker Desktop is installed and running.

You should see logs in your terminal and the app running in your browser.

Once done:

Ctrl + C

Step 3: Move to Kubernetes

Now comes the interesting part.

Inside the project:

cd nginx-node-redis/kube-config/

You’ll find three directories:

  • nginx/

  • node/

  • redis/

Each contains:

  • Deployment YAML

  • Service YAML

📦 Nginx Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf

      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx-config

A Note on AI & YAML

The best thing about AI? 👉 You can generate YAML files instantly.

But what happens when things break?

That’s where fundamentals matter.

Let’s break this down 👇

Understanding the Deployment

1. API Version & Kind

Defines what resource we are creating:

kind: Deployment

2. Labels (IMPORTANT)

Labels appear in three places — and each has a role:

  • metadata.labels → tagging the Deployment

  • spec.selector.matchLabels → tells Deployment which Pods to manage

  • template.metadata.labels → applied to Pods (used by Services)

👉 This is how Kubernetes “connects” resources.

3. Container Spec

image: nginx:1.14.2
ports:
  - containerPort: 80

Defines:

  • Image

  • Port

4. ConfigMap Mount

volumeMounts:
- name: nginx-config-volume
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf

👉 This mounts your custom Nginx config into the container.

🌐 Nginx Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

Here:

  • We use ClusterIP

  • Selector matches:

    app: nginx
    

👉 This connects the Service to Pods.

⚙️ Nginx ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {}

    http {
      upstream loadbalancer {
        server node-service:5000;
      }

      server {
        listen 80;

        location / {
          proxy_pass http://loadbalancer;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location = /favicon.ico {
          log_not_found off;
          access_log off;
        }
      }
    }

👉 Here we:

  • Override default Nginx config

  • Route traffic to:

    node-service:5000
    

Step 4: Deploy to Minikube

# Start Minikube
minikube start

# Go to config directory
cd nginx-node-redis/kube-config/

# Deploy Redis
cd redis/ && kubectl apply -f deploy.yml -f svc.yml
cd ..

# Deploy Node
cd node/ && kubectl apply -f deploy.yaml -f svc.yml
cd ..

# Deploy Nginx
cd nginx/ && kubectl apply -f deploy.yml -f svc.yml -f configmap.yaml

Wait for Pods

kubectl get pods -w

Wait until all pods are:

Running


Access the App

minikube service nginx-service

👉 This opens your app in the browser — now running on Kubernetes 🎉

Self-Healing in Action

Here’s where Kubernetes shines.

Let’s break something 😈

kubectl delete pod <pod-name>

Now check:

kubectl get pods

👉 You’ll see:

  • A new pod automatically created

🧠 What just happened?

Kubernetes ensures:

“Actual state = Desired state”

Even if you:

  • Delete a pod

  • Crash a container

👉 Kubernetes will bring it back


🔍 What’s Happening Under the Hood?

Now that everything is up and running, let’s take a step back and understand how things are actually working behind the scenes 👇

1. Accessing the Application

When you run:

minikube service nginx-service

👉 Minikube exposes your service and gives you a URL with a port.

2. Request Hits Nginx Service

Once you hit that URL:

  • The Nginx Service receives the request

  • It looks at its selector:

    app: nginx
    
  • And forwards each request to one of the matching Nginx Pods

3. Inside the Nginx Pod

Inside the pod:

  • Nginx uses the custom config (via ConfigMap)

  • The request is proxied to:

    node-service:5000
    

4. Node Service Load Balancing

Now the interesting part 👀

  • node-service is a ClusterIP Service

  • Its selector matches multiple Pods (the Node Deployment runs 3 replicas)
👉 Kubernetes automatically distributes traffic:

node-service
   ↓
 ┌──────────┬──────────┬──────────┐
 │ node-pod1│ node-pod2│ node-pod3│
 └──────────┴──────────┴──────────┘
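
You can see this fan-out yourself: every Service keeps an Endpoints object listing the Pod IPs it currently routes to. Assuming the demo cluster is running (and that the Node Pods carry an `app=node` label — an assumption based on the manifests’ pattern):

```shell
# List the Pod IP:port pairs behind node-service
kubectl get endpoints node-service

# Compare against the Pods' own IPs
kubectl get pods -l app=node -o wide
```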

5. Node App Talks to Redis

Inside your Node app:

  • It connects to:

    redis-service
    
  • Stores:

    • Request count

    • Cache data

6. Response Flow

Finally, the response travels back:

Redis → Node → Nginx → Browser

🎉 And you see the updated request count

🧠 Key Insight

Notice something important here…

👉 We never used a single IP address.

Everything works using:

  • Service names

  • Internal DNS

  • Labels & selectors

This is called Service Discovery — one of the most powerful features of Kubernetes.
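
Under the hood, every Service gets a DNS record of the form `<service>.<namespace>.svc.cluster.local`. You can check resolution from inside the cluster with a throwaway Pod (a sketch, assuming the demo cluster is up):

```shell
# Resolve node-service from inside a temporary busybox Pod
kubectl run dns-test --rm -it --image=busybox --restart=Never \
  -- nslookup node-service
```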

Scaling Made Easy

Want more traffic handling capacity?

Just update:

replicas: 3

👉 Increase or decrease as needed

👉 No changes required anywhere else

Kubernetes handles the rest
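
You can edit the YAML and re-apply it, or scale imperatively with a single command:

```shell
# Scale the Nginx Deployment from 3 to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5

# Watch the new Pods appear
kubectl get pods -l app=nginx

# Scale back down -- Kubernetes terminates the extras
kubectl scale deployment nginx-deployment --replicas=3
```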

Cleanup

Once you’re done experimenting, you can delete the cluster:

minikube delete


🎯 Conclusion

And that’s a wrap for this one! 🚀

In this blog, we didn’t just deploy an application on Kubernetes — we actually understood what’s happening behind the scenes. From Deployments and Services to ConfigMaps and internal service discovery, you now have a solid foundation to start building real-world K8s projects.

More importantly, you saw how:

  • Kubernetes replaces static setups like Docker Compose with dynamic, scalable systems

  • Services enable seamless communication without worrying about IPs

  • And how the system self-heals to match the desired state

This is just the beginning of the K8s with Pravesh series. In the upcoming blogs, we’ll go deeper into more advanced concepts and build even more powerful systems 💥

🔗 Let’s Connect

If you found this helpful, feel free to connect with me and follow along for more DevOps and Kubernetes content:

If you have any questions, got stuck somewhere, or just want to discuss ideas — my DMs are always open 🙌

Until next time… Keep building, keep learning, and keep shipping 🚀