Pravesh Sudha

šŸš€ Building an AI-Powered CI/CD Copilot with Jenkins and AWS Lambda


šŸ’” Introduction

Hey folks, welcome to the world of Agentic Tools and DevOps.

Today, we’re diving into CI/CD pipelines and exploring how we can debug them efficiently and almost instantly using AI. In this project, we’ll build an AI-powered CI/CD Copilot where AWS Lambda serves as the core logic layer. This Lambda function will interact with the Google Gemini API to analyze pipeline failures and help us debug them intelligently.

The goal of this project is not just to integrate AI into a CI/CD workflow, but to help you understand how to build your own AI agent from scratch — one that can assist in real-world DevOps scenarios.

So, without further ado, let’s get started.


šŸ’” Prerequisites

Before we begin, make sure you have the following requirements in place:

  • Docker & Docker Hub account
    We will run parts of this project inside Docker containers. Later, we’ll push our custom image to Docker Hub, so make sure you have both Docker installed and a Docker Hub account ready.

  • Jenkins (Our CI/CD Tool)
    We’ll use Jenkins for demonstration purposes. You can either:

    • Run Jenkins as a Docker container, or

    • Install it directly from the official website.

  • Terraform
    We will provision our infrastructure — including the Gemini API key (stored securely) and the AWS Lambda function — using Terraform.

    Make sure:

    • Terraform CLI is installed

    • Your AWS credentials are configured

    • The IAM user has permissions for AWS Lambda and AWS Secrets Manager

If you’re new to Terraform setup, you can follow this guide:
šŸ‘‰ https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli


šŸ’” How It Works

The complete source code for this project is available in this GitHub repository:
šŸ‘‰ https://github.com/Pravesh-Sudha/ai-devops-agent

Navigate to the cicd-copilot directory to follow along.

If you’ve been following my work, you might recognize this project. I originally used this same Node.js Book Reader application to demonstrate how Docker works with Node.js. For this AI-powered CI/CD Copilot, I’ve made specific modifications — particularly in the Jenkinsfile and the terra-config directory.

Inside the terra-config directory, you’ll find:

  • main.tf – Provisions:

    • AWS Lambda function

    • AWS Secrets Manager secret (to securely store the Gemini API key)

  • lambda.zip – The packaged Lambda deployment artifact (zipped lambda_function.py)

  • lambda_function.py – The core of this project.
    This file contains the AI agent logic and the structured prompt sent to the Gemini API.

  • iam.tf – Defines the IAM roles and permissions required for:

    • AWS Lambda

    • AWS Secrets Manager
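As a rough sketch of how these pieces fit together, main.tf wires the secret and the function roughly like this (resource and variable names here are illustrative, not necessarily the repo's exact ones):

```hcl
variable "gemini_api_key" {
  type      = string
  sensitive = true
}

# Store the Gemini API key securely
resource "aws_secretsmanager_secret" "gemini" {
  name = "gemini-api-key"
}

resource "aws_secretsmanager_secret_version" "gemini" {
  secret_id     = aws_secretsmanager_secret.gemini.id
  secret_string = var.gemini_api_key
}

# The copilot's core logic layer
resource "aws_lambda_function" "copilot" {
  function_name = "cicd-copilot"
  filename      = "lambda.zip"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.12"
  role          = aws_iam_role.lambda_role.arn # defined in iam.tf

  environment {
    variables = {
      GEMINI_SECRET_ARN = aws_secretsmanager_secret.gemini.arn
    }
  }
}
```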

Architecture Overview

The core idea behind this project is simple:

  1. Jenkins detects a pipeline failure.

  2. It collects contextual information (stage name, build ID, logs).

  3. It sends that data to AWS Lambda.

  4. Lambda calls the Gemini API.

  5. Gemini analyzes the logs and returns structured debugging insights.
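In Python terms, the Lambda side of this flow can be sketched as a simple skeleton (this is a simplified illustration, not the repo's exact code — in particular, the `call_model` parameter is injectable here purely for demonstration, while the real implementation calls the Gemini API directly):

```python
import json


def lambda_handler(event, context, call_model=None):
    """Simplified skeleton of the copilot's Lambda entry point."""
    # 1. Pull the pipeline context out of the payload Jenkins sent
    body = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    stage, logs = body["stage"], body["logs"]

    # 2. Build the structured prompt from the pipeline context
    prompt = f"Stage name: {stage}\nLogs:\n{logs}"

    # 3. Ask the model for a structured analysis
    analysis = call_model(prompt) if call_model else {"error": "no model configured"}

    # 4. Return the analysis to the caller (Jenkins)
    return {"statusCode": 200, "body": json.dumps(analysis)}
```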

Payload Sent to Lambda

The Lambda function expects a JSON payload in the following format:

{
   "stage": "$stage",        # Name of the stage where the pipeline failed
   "job": "$job",            # Job name (e.g., cicd-copilot)
   "build_id": "$build_id",  # Build ID number (e.g., 1, 2, 3)
   "logs": "$logs"           # Last 200 lines of failure logs
}

This structured input allows the AI agent to understand the pipeline context before analyzing the logs.
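Shape-wise, the payload Jenkins assembles is equivalent to something like the following (illustrated in Python here for clarity; in the Jenkinsfile itself it is put together in shell):

```python
import json


def build_payload(stage: str, job: str, build_id: str, log_file: str) -> str:
    """Assemble the JSON body the Lambda function expects."""
    with open(log_file) as f:
        # Keep only the last 200 lines, matching what the pipeline sends
        logs = "".join(f.readlines()[-200:])
    return json.dumps({
        "stage": stage,
        "job": job,
        "build_id": build_id,
        "logs": logs,
    })
```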

Prompt Sent to Gemini API

Inside the Lambda function, we make a POST request to the Gemini API with the following structured prompt:

You are a senior CI/CD Copilot specialized in Jenkins pipelines.

Pipeline context:
- Stage name: {stage}
- Expected outcome: Build an artifact usable by later stages

Your tasks:
1. Identify the failure category (build / runtime / config / infra / dependency / auth / unknown)
2. Identify the most likely root cause
3. Provide actionable fixes
4. Suggest a patch ONLY if clearly inferable

Respond ONLY in valid JSON with this schema:
{{
  "failure_category": "",
  "root_cause": "",
  "actionable_fixes": [],
  "suggested_patch": {{
    "file": "",
    "line": "",
    "fix": ""
  }}
}}

Logs:
{logs}

The prompt dynamically injects two key variables:

  • {stage} – The pipeline stage name

  • {logs} – The failure logs
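A minimal sketch of how that injection works (the template below is abbreviated; the full prompt lives in lambda_function.py). Note that if the template is rendered with Python's `str.format()`, any literal braces in the JSON schema have to be doubled as `{{ }}` — which is why the template appears that way in the source:

```python
# Literal JSON braces in the schema are doubled ({{ }}) so that
# str.format() treats them as text rather than placeholders.
PROMPT_TEMPLATE = """You are a senior CI/CD Copilot specialized in Jenkins pipelines.

Pipeline context:
- Stage name: {stage}

Respond ONLY in valid JSON with this schema:
{{"failure_category": "", "root_cause": "", "actionable_fixes": []}}

Logs:
{logs}"""


def build_prompt(stage: str, logs: str) -> str:
    """Inject the pipeline context and failure logs into the template."""
    return PROMPT_TEMPLATE.format(stage=stage, logs=logs)
```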

If you’d like to explore the full Lambda implementation, you can view it here:
šŸ‘‰ https://github.com/Pravesh-Sudha/ai-devops-agent/blob/main/cicd-copilot/terra-config/lambda_function.py

How It Integrates with Jenkins

You might be wondering — how exactly does this connect with Jenkins?

Inside the Jenkinsfile, each stage:

  • Sets an environment variable for the stage name.

  • Redirects command output (in case of failure) into a LOG_FILE.

If any stage fails:

  • The post { failure { ... } } block is triggered.

  • Jenkins constructs the JSON payload.

  • It invokes the AWS Lambda function.

  • The AI-generated failure analysis is printed directly into the Jenkins console output.

This gives you instant, structured debugging assistance right inside your CI/CD pipeline.
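Because the prompt pins the model to the JSON schema shown earlier, the Lambda function can parse the reply into a structure before handing it back to Jenkins. A minimal sketch of that parsing step (this assumes the model honors the schema; production code should also handle malformed replies):

```python
import json


def parse_analysis(model_reply: str) -> dict:
    """Extract the structured analysis from the model's reply.

    Models sometimes wrap JSON in markdown code fences, so strip
    those before parsing.
    """
    text = model_reply.strip()
    if text.startswith("```"):
        # Drop the opening fence (with its optional "json" tag) and the closing fence
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)
```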

How to Integrate This in Your Own Workspace

To replicate this approach in your own pipeline:

  1. Append log redirection to each command so its output is captured on failure:

    <command> >> ${LOG_FILE} 2>&1
    
  2. Define an environment variable for the stage name.

  3. Provision:

    • AWS Lambda

    • IAM roles

    • Secrets Manager (for the Gemini API key)
      using Terraform.

  4. Add a post failure block in your Jenkinsfile to invoke the Lambda function with the structured JSON payload.

Once configured, your CI/CD pipeline becomes AI-assisted — capable of analyzing its own failures and suggesting actionable fixes.


šŸ’” Practical Demonstration

Enough with the theory — let’s see this in action.

Step 1: Fork and Clone the Repository

First, head over to the GitHub repository and fork it under your own username.
You’ll be committing fixes to the code later (the pipeline is designed to fail at first), so forking is important.

After forking:

git clone https://github.com/your-username/ai-devops-agent.git
cd ai-devops-agent/cicd-copilot/terra-config

Step 2: Initialize Terraform

Inside the terra-config directory, initialize Terraform:

terraform init

Step 3: Generate Your Gemini API Key

To provision the infrastructure, you’ll need a GEMINI_API_KEY.

  1. Go to Google AI Studio

  2. Log in with your Google account

  3. Navigate to the API section

  4. Click Create API Key

  5. Give it a name and generate the key

  6. Store it securely

Now, apply the Terraform configuration:

terraform apply -var="gemini_api_key=<Paste-your-key-here>" --auto-approve

āš ļø Make sure the configured AWS IAM user has the required permissions (Lambda and Secrets Manager access), as mentioned in the prerequisites section.

Once completed, your infrastructure (Lambda function + IAM roles + Secret) will be up and running.

Step 4: Configure Jenkins Pipeline

Open your Jenkins dashboard (usually running on http://localhost:8080).

  1. Click New Item

  2. Select Pipeline

  3. Name it: cicd-copilot

  4. Choose Pipeline script from SCM

Configure the following:

  • SCM: Git

  • Repository URL:
    https://github.com/your-username/ai-devops-agent

  • Branch Specifier: main

  • Script Path:
    cicd-copilot/Jenkinsfile

Click Save.

Step 5: Install Required Jenkins Plugins

Navigate to:

Manage Jenkins → Plugins

Install the following plugins:

  • Docker

  • Docker Pipeline

  • Docker Commons

Step 6: Add Docker to Jenkins PATH

Ensure Docker is accessible inside Jenkins.

In your terminal, run:

which docker

Copy the output path.

Now go to:

Manage Jenkins → System → Global Properties

Append the copied path to the existing PATH variable using : as a separator. Save the configuration.

Step 7: Add Docker Hub Credentials

Navigate to:

Manage Jenkins → Credentials

  1. Add a new credential:

    • Kind: Username with password

    • Username: Your Docker Hub username

    • Password: Your Docker Hub password

    • ID: docker-cred

Save it.

Step 8: Trigger the Pipeline

Now go back to your cicd-copilot project and click Build Now.

Open Console Output.

You will notice that the pipeline fails — this is intentional.

The logs are automatically captured and sent to the AI Agent, which returns structured debugging analysis inside the Jenkins console.

In the first failure, the AI identifies a typo in the Dockerfile.
For example:

apine

It should be:

alpine
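Tied back to the schema from earlier, the analysis printed in the console for this failure might look roughly like this (illustrative only, not verbatim output):

```json
{
  "failure_category": "build",
  "root_cause": "The Dockerfile base image is misspelled as 'apine' instead of 'alpine', so the image pull fails.",
  "actionable_fixes": [
    "Correct the base image name in the Dockerfile",
    "Commit the fix and rebuild the pipeline"
  ],
  "suggested_patch": {
    "file": "Dockerfile",
    "line": "",
    "fix": "Replace 'apine' with 'alpine' in the FROM line"
  }
}
```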

Fix the typo in your forked repository and commit the changes.

Step 9: Second Failure (Version Mismatch)

Rebuild the pipeline.

This time, the pipeline fails again — but for a different reason. There is a Docker image version mismatch.

The AI analysis might suggest that the image is private or unavailable. However, the real issue is in the Jenkinsfile.

Inside the Run Container stage, change the image version from:

v2

to:

v1

Commit the change and rebuild the pipeline.

Step 10: Successful Pipeline Run

Now, when you trigger the pipeline again:

  • The build succeeds

  • The Docker image is pushed to your Docker Hub account

  • The container starts successfully

Visit:

http://localhost:3000

You should see the Book Reader application running.

Stop the Application

To stop the running container:

docker kill cicd-copilot

Clean Up Infrastructure

To avoid unnecessary AWS charges, destroy the infrastructure:

terraform destroy -var="gemini_api_key=<Paste-your-key-here>" --auto-approve

What We Achieved

In this project, we built an AI-powered CI/CD Copilot using:

  • Jenkins for pipeline orchestration

  • AWS Lambda for AI agent logic

  • AWS Secrets Manager for secure API key storage

  • Google Gemini API for log analysis

The agent receives contextual pipeline information and failure logs, analyzes them intelligently, and provides structured debugging insights directly inside the CI/CD workflow.

Instead of manually scanning logs, you now have an AI assistant that understands context, categorizes failures, identifies root causes, and suggests actionable fixes — making debugging faster, smarter, and more efficient.


šŸ’” Conclusion

Modern CI/CD pipelines are powerful — but when they fail, debugging can quickly become time-consuming and frustrating. In this project, we went a step further by integrating AI directly into the pipeline workflow.

By combining:

  • Jenkins for orchestration

  • AWS Lambda for serverless execution

  • AWS Secrets Manager for secure API key handling

  • Google Gemini API for intelligent log analysis

we built an AI-powered CI/CD Copilot capable of understanding pipeline context, analyzing failure logs, identifying root causes, and suggesting actionable fixes — all automatically.

This isn’t just about log analysis. It’s about shifting from reactive debugging to intelligent, context-aware automation.

As AI continues to evolve, integrating agentic systems into DevOps workflows will become increasingly common. Building projects like this not only strengthens your cloud and automation skills but also prepares you for the next wave of AI-driven infrastructure.

If you found this project helpful, feel free to connect with me and follow my work:

I regularly share content on DevOps, AWS, Terraform, CI/CD, and building real-world cloud projects from scratch.

If you build your own version of this AI CI/CD Copilot, tag me — I’d love to see what you create.

Happy Building šŸš€