Building an AI-Powered CI/CD Copilot with Jenkins and AWS Lambda

💡 Introduction
Hey folks, welcome to the world of Agentic Tools and DevOps.
Today, we're diving into CI/CD pipelines and exploring how we can debug them efficiently and almost instantly using AI. In this project, we'll build an AI-powered CI/CD Copilot where AWS Lambda serves as the core logic layer. This Lambda function will interact with the Google Gemini API to analyze pipeline failures and help us debug them intelligently.
The goal of this project is not just to integrate AI into a CI/CD workflow, but to help you understand how to build your own AI agent from scratch, one that can assist in real-world DevOps scenarios.
So, without further ado, let's get started.
💡 Prerequisites
Before we begin, make sure you have the following requirements in place:
Docker & Docker Hub account
We will run parts of this project inside Docker containers. Later, we'll push our custom image to Docker Hub, so make sure you have both Docker installed and a Docker Hub account ready.
Jenkins (our CI/CD tool)
We'll use Jenkins for demonstration purposes. You can either:
Run Jenkins as a Docker container, or
Install it directly from the official website.
Terraform
We will provision our infrastructure, including the Gemini API key (stored securely) and the AWS Lambda function, using Terraform. Make sure:
The Terraform CLI is installed
Your AWS credentials are configured
The IAM user has permissions for AWS Lambda and AWS Secrets Manager
If you're new to Terraform setup, you can follow this guide:
🔗 https://blog.praveshsudha.com/getting-started-with-terraform-a-beginners-guide#heading-step-1-install-the-aws-cli
💡 How It Works
The complete source code for this project is available in this GitHub repository:
🔗 https://github.com/Pravesh-Sudha/ai-devops-agent
Navigate to the cicd-copilot directory to follow along.
If you've been following my work, you might recognize this project. I originally used this same Node.js Book Reader application to demonstrate how Docker works with Node.js. For this AI-powered CI/CD Copilot, I've made specific modifications, particularly in the Jenkinsfile and the terra-config directory.
Inside the terra-config directory, you'll find:
main.tf: provisions the AWS Lambda function and an AWS Secrets Manager secret (to securely store the Gemini API key)
lambda.zip: the packaged Lambda deployment artifact (a zipped lambda_function.py)
lambda_function.py: the core of this project, containing the AI agent logic and the structured prompt sent to the Gemini API
iam.tf: defines the IAM roles and permissions required for AWS Lambda and AWS Secrets Manager
Architecture Overview
The core idea behind this project is simple:
Jenkins detects a pipeline failure.
It collects contextual information (stage name, build ID, logs).
It sends that data to AWS Lambda.
Lambda calls the Gemini API.
Gemini analyzes the logs and returns structured debugging insights.
Payload Sent to Lambda
The Lambda function expects a JSON payload in the following format:
{
  "stage": "$stage",        # name of the stage where the pipeline failed
  "job": "$job",            # job name (e.g., cicd-copilot)
  "build_id": "$build_id",  # build ID number (e.g., 1, 2, 3)
  "logs": "$logs"           # last 200 lines of failure logs
}
This structured input allows the AI agent to understand the pipeline context before analyzing the logs.
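As an illustrative sketch (not the repository's exact code), a Lambda handler unpacking this payload might look like the following; the field names come from the format above, everything else is an assumption:

```python
import json

def lambda_handler(event, context):
    """Illustrative handler: unpack the pipeline context sent by Jenkins."""
    # API Gateway-style invocations wrap the payload in a "body" string;
    # direct invocations pass the dict as-is.
    body = json.loads(event["body"]) if "body" in event else event

    stage = body.get("stage", "unknown")        # failing stage name
    job = body.get("job", "unknown")            # Jenkins job name
    build_id = body.get("build_id", "unknown")  # build number
    logs = body.get("logs", "")                 # last 200 lines of logs

    # ...the real function would now build the prompt and call Gemini...
    return {
        "statusCode": 200,
        "body": json.dumps({
            "stage": stage,
            "job": job,
            "build_id": build_id,
            "log_lines": len(logs.splitlines()),
        }),
    }
```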
Prompt Sent to Gemini API
Inside the Lambda function, we make a POST request to the Gemini API with the following structured prompt:
You are a senior CI/CD Copilot specialized in Jenkins pipelines.
Pipeline context:
- Stage name: {stage}
- Expected outcome: Build an artifact usable by later stages
Your tasks:
1. Identify the failure category (build / runtime / config / infra / dependency / auth / unknown)
2. Identify the most likely root cause
3. Provide actionable fixes
4. Suggest a patch ONLY if clearly inferable
Respond ONLY in valid JSON with this schema:
{{
"failure_category": "",
"root_cause": "",
"actionable_fixes": [],
"suggested_patch": {{
"file": "",
"line": "",
"fix": ""
}}
}}
Logs:
{logs}
The prompt dynamically injects two key variables:
{stage}: the pipeline stage name
{logs}: the failure logs
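As a concrete sketch, this kind of template can be filled with Python's str.format, which is also why the JSON schema in the prompt uses doubled braces: they escape to literal braces. The template below is abbreviated and the sample values are illustrative:

```python
# Abbreviated version of the prompt above; doubled braces {{ }}
# escape literal JSON braces for str.format().
PROMPT_TEMPLATE = """You are a senior CI/CD Copilot specialized in Jenkins pipelines.
Pipeline context:
- Stage name: {stage}
- Expected outcome: Build an artifact usable by later stages
Respond ONLY in valid JSON with this schema:
{{
  "failure_category": "",
  "root_cause": "",
  "actionable_fixes": [],
  "suggested_patch": {{"file": "", "line": "", "fix": ""}}
}}
Logs:
{logs}"""

# Inject the two dynamic variables (sample values for illustration):
prompt = PROMPT_TEMPLATE.format(stage="Build", logs="npm ERR! code E404")
```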
If you'd like to explore the full Lambda implementation, you can view it here:
🔗 https://github.com/Pravesh-Sudha/ai-devops-agent/blob/main/cicd-copilot/terra-config/lambda_function.py
How It Integrates with Jenkins
You might be wondering: how exactly does this connect with Jenkins?
Inside the Jenkinsfile, each stage:
Sets an environment variable for the stage name.
Redirects command output (in case of failure) into a LOG_FILE.
If any stage fails:
The post { failure { ... } } block is triggered.
Jenkins constructs the JSON payload.
It invokes the AWS Lambda function.
The AI-generated failure analysis is printed directly into the Jenkins console output.
This gives you instant, structured debugging assistance right inside your CI/CD pipeline.
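Since the prompt asks Gemini for strict JSON, the Lambda can sanity-check the reply before printing it. A hedged sketch (the function name and the fence-stripping heuristic are illustrative, not taken from the repository):

```python
import json

# Keys requested by the schema in the prompt shown earlier.
EXPECTED_KEYS = {"failure_category", "root_cause", "actionable_fixes", "suggested_patch"}

def parse_analysis(raw_text):
    """Parse the model's reply and verify it matches the prompt's schema."""
    # Models sometimes wrap JSON in markdown fences despite instructions;
    # strip them defensively before parsing.
    cleaned = raw_text.strip().removeprefix("```json").removesuffix("```").strip()
    analysis = json.loads(cleaned)
    missing = EXPECTED_KEYS - analysis.keys()
    if missing:
        raise ValueError(f"response missing keys: {missing}")
    return analysis
```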
How to Integrate This in Your Own Workspace
To replicate this approach in your own pipeline:
Append log redirection to each command: ${LOG_FILE} 2>&1
Define an environment variable for the stage name.
Provision the following using Terraform:
AWS Lambda
IAM roles
Secrets Manager (for the Gemini API key)
Add a post { failure { ... } } block in your Jenkinsfile to invoke the Lambda function with the structured JSON payload.
Once configured, your CI/CD pipeline becomes AI-assisted: capable of analyzing its own failures and suggesting actionable fixes.
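The payload-building step above can be sketched in Python. Everything here is illustrative: the helper name, the 200-line truncation default (taken from the payload description earlier), and the Lambda name "cicd-copilot-analyzer" are placeholders; the article's Jenkinsfile may invoke Lambda differently, for example via the AWS CLI.

```python
import json

def build_payload(stage, job, build_id, log_text, max_lines=200):
    """Assemble the JSON payload the Lambda expects, keeping only the
    last `max_lines` lines of output."""
    tail = "\n".join(log_text.splitlines()[-max_lines:])
    return json.dumps({"stage": stage, "job": job,
                       "build_id": build_id, "logs": tail})

# Invoking the function would then look roughly like this (boto3 assumed;
# the function name is a placeholder):
# import boto3
# client = boto3.client("lambda")
# resp = client.invoke(
#     FunctionName="cicd-copilot-analyzer",
#     Payload=build_payload("Build", "cicd-copilot", "7", captured_logs),
# )
# print(resp["Payload"].read().decode())
```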
💡 Practical Demonstration
Enough with the theory; let's see this in action.
Step 1: Fork and Clone the Repository
First, head over to the GitHub repository and fork it under your own username.
You'll be intentionally modifying the code later to trigger pipeline failures, so forking is important.
After forking:
git clone https://github.com/your-username/ai-devops-agent.git
cd ai-devops-agent/cicd-copilot/terra-config
Step 2: Initialize Terraform
Inside the terra-config directory, initialize Terraform:
terraform init
Step 3: Generate Your Gemini API Key
To provision the infrastructure, you'll need a GEMINI_API_KEY.
Go to Google AI Studio
Log in with your Google account
Navigate to the API section
Click Create API Key
Give it a name and generate the key
Store it securely
Now, apply the Terraform configuration:
terraform apply -var="gemini_api_key=<Paste-your-key-here>" --auto-approve

⚠️ Make sure the configured AWS IAM user has the required permissions (Lambda and Secrets Manager access), as mentioned in the prerequisites section.
Once completed, your infrastructure (Lambda function + IAM roles + Secret) will be up and running.
Step 4: Configure Jenkins Pipeline
Open your Jenkins dashboard (usually running on http://localhost:8080).
Click Create New Item
Select Pipeline
Name it: cicd-copilot
Choose Pipeline script from SCM
Configure the following:
SCM: Git
Repository URL: https://github.com/your-username/ai-devops-agent
Branch Specifier: main
Script Path: cicd-copilot/Jenkinsfile
Click Save.

Step 5: Install Required Jenkins Plugins
Navigate to:
Manage Jenkins → Plugins
Install the following plugins:
Docker
Docker Pipeline
Docker Commons
Step 6: Add Docker to Jenkins PATH
Ensure Docker is accessible inside Jenkins.
In your terminal, run:
which docker
Copy the output path.
Now go to:
Manage Jenkins → System → Global Properties
Append the copied path to the existing PATH variable using : as a separator. Save the configuration.

Step 7: Add Docker Hub Credentials
Navigate to:
Manage Jenkins → Credentials
Add a new credential:
Kind: Username with password
Username: Your Docker Hub username
Password: Your Docker Hub password
ID: docker-cred
Save it.

Step 8: Trigger the Pipeline
Now go back to your cicd-copilot project and click Build Now.
Open Console Output.
You will notice that the pipeline fails; this is intentional.
The logs are automatically captured and sent to the AI Agent, which returns structured debugging analysis inside the Jenkins console.
In the first failure, the AI identifies a typo in the Dockerfile.
For example:
apine
It should be:
alpine

Fix the typo in your forked repository and commit the changes.
Step 9: Second Failure (Version Mismatch)
Rebuild the pipeline.
This time, the pipeline fails again, but for a different reason: there is a Docker image version mismatch.
The AI analysis might suggest that the image is private or unavailable. However, the real issue is in the Jenkinsfile.

Inside the Run Container stage, change the image version from:
v2
to:
v1

Commit the change and rebuild the pipeline.
Step 10: Successful Pipeline Run
Now, when you trigger the pipeline again:
The build succeeds
The Docker image is pushed to your Docker Hub account
The container starts successfully
Visit:
http://localhost:3000

You should see the Book Reader application running.


Stop the Application
To stop the running container:
docker kill cicd-copilot
Clean Up Infrastructure
To avoid unnecessary AWS charges, destroy the infrastructure:
terraform destroy -var="gemini_api_key=<Paste-your-key-here>" --auto-approve
What We Achieved
In this project, we built an AI-powered CI/CD Copilot using:
Jenkins for pipeline orchestration
AWS Lambda for AI agent logic
AWS Secrets Manager for secure API storage
Google Gemini API for log analysis
The agent receives contextual pipeline information and failure logs, analyzes them intelligently, and provides structured debugging insights directly inside the CI/CD workflow.
Instead of manually scanning logs, you now have an AI assistant that understands context, categorizes failures, identifies root causes, and suggests actionable fixes, making debugging faster, smarter, and more efficient.
💡 Conclusion
Modern CI/CD pipelines are powerful, but when they fail, debugging can quickly become time-consuming and frustrating. In this project, we went a step further by integrating AI directly into the pipeline workflow.
By combining:
Jenkins for orchestration
AWS Lambda for serverless execution
AWS Secrets Manager for secure API handling
Google Gemini API for intelligent log analysis
we built an AI-powered CI/CD Copilot capable of understanding pipeline context, analyzing failure logs, identifying root causes, and suggesting actionable fixes, all automatically.
This isn't just about log analysis. It's about shifting from reactive debugging to intelligent, context-aware automation.
As AI continues to evolve, integrating agentic systems into DevOps workflows will become increasingly common. Building projects like this not only strengthens your cloud and automation skills but also prepares you for the next wave of AI-driven infrastructure.
If you found this project helpful, feel free to connect with me and follow my work:
Website: https://praveshsudha.com
Blog: https://blog.praveshsudha.com
LinkedIn: https://www.linkedin.com/in/pravesh-sudha
GitHub: https://github.com/Pravesh-Sudha
Twitter/X: https://x.com/praveshstwt
YouTube: https://youtube.com/@pravesh-sudha
I regularly share content on DevOps, AWS, Terraform, CI/CD, and building real-world cloud projects from scratch.
If you build your own version of this AI CI/CD Copilot, tag me; I'd love to see what you create.
Happy Building!
