Nowadays, it's hard to imagine production-ready software shipping without some kind of CI/CD strategy in place. However, not all developers are familiar with the core principles of CI/CD processes, and so they don't fully appreciate their importance or the benefits they bring. In this article, we'll clarify the main ideas behind CI/CD and also walk through a basic CD implementation for serverless projects.

What is CI/CD and why is it important?

Continuous integration (CI) and continuous delivery (CD) are commonly used terms today, typically used together.

  • Continuous integration is a set of approaches and practices that automate the processing of new changes coming from developers, verifying that those changes are sound and can be used in subsequent release steps. 
  • Continuous delivery, in turn, is the process of converting the codebase into final builds that are always ready to be deployed to production quickly and easily.

Having a CI/CD strategy implemented will dramatically decrease the time needed to perform a release, prevent potential mistakes that can occur during manual deployment, and make it possible to perform releases more frequently!

AWS and CI/CD pipelines tools

Being the leading cloud provider, AWS has its own set of tools for implementing CI/CD procedures, allowing you to combine different resources into a single pipeline, which can then be easily configured and monitored. 

Probably the most important service is AWS CodeBuild. Its main job is to fetch the source code, compile it, run tests, and perform any other manipulations required for producing the final build, referred to as artifacts. In some simplified cases, this service can also be used for deployment, a scenario we’ll take a closer look at in the following sections.

Then we have AWS CodeDeploy, a service with a self-explanatory name that is responsible for running deployment processes. Usually, it takes the artifacts produced by CodeBuild and delivers them to the corresponding environment.

Finally, AWS CodePipeline acts as a configurable wrapper for the services above. Here, you can link them together (even with third-party services if needed) to create the CI/CD pipeline.

What are we going to build?

Of course, CI/CD strategies also apply to serverless projects and follow the same principles. Today, you'll be building a basic pipeline for an AWS Lambda application. To complete each step, you'll need a GitHub repository for the future app, an AWS account, and Node.js and the Serverless Framework installed.

In this example, you will implement a scenario with a minimum of services involved. Each time a developer pushes code, CodePipeline will react by running the CodeBuild service to fetch the sources and build, test, and even deploy the application to AWS Lambda. 

As you probably noticed, CodeBuild will be responsible not only for building but also for delivering the application to AWS Lambda. It's one of the simplest yet workable approaches–perfect for learning the basics.


First, we need a simple application. It can be taken from a repository of serverless examples, or you can create something more complex. The main thing is to have all the sources in your own GitHub repository. To keep this article from being overloaded with code, you'll just be using the basic app created by running the following command in the project folder:

serverless create --template aws-nodejs

This is probably the smallest serverless application possible: It consists of a single hello function responding with a greeting message. Open the newly generated serverless.yml file, and fill it with the following code:

service: sls-ci-cd

provider:
  name: aws
  runtime: nodejs12.x

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: check
          method: get

Before you start configuring the CI/CD items, go ahead and add some tests to your project, as your future pipeline should not only deliver the app but also run the tests beforehand. Install a special serverless plugin for testing by running these commands in the project folder:

npm init
npm install -D serverless-mocha-plugin

Then, open the serverless.yml file, and add these lines:

plugins:
  - serverless-mocha-plugin

Now that the mocha plugin is installed, run:

sls create test -f hello

This will create a /test/hello.js file with a basic test for the Lambda function. Let's add some custom testing by adding these lines inside of the “describe” block:

it('not empty and 200', async () => {
    const response = await wrapped.run({});
    expect(response).to.not.be.empty;
    expect(response.statusCode).to.be.equal(200);
});

To check your setup, run:

sls invoke test

If you see messages that the test passed successfully, then you've done everything correctly. Go ahead and initialize the GitHub repository in the project folder, and make an initial commit/push so the source code lives in the remote repository. Now you're ready to proceed with the AWS configuration!


Navigate to the AWS console, log in, and go to the AWS CodePipeline page. Press the “create pipeline” button to start. In this guide, you’ll be skipping almost all non-mandatory settings, leaving them with default values. So, simply enter the name of your future pipeline, and press “next.”

Now connect your GitHub account to choose the corresponding repository and its branch.

Choose AWS CodeBuild as a build provider, and then a pop-up for creating a new CodeBuild project will open.

Figure 1: Creating a CodeBuild during pipeline creation

Here, you'll need to select the environment where you want to run your build scripts. One of the managed images proposed by AWS will suit you just fine: select Ubuntu, then the Standard runtime, and finally the latest image version, 4.0, as it supports the Node.js 12 runtime your application needs, according to the documentation.

Figure 2: Main settings of the CodeBuild project

You can also add additional configurations in this step, such as timeouts, compute power, and others, but let’s just leave it as it is for now.

Press the “next” button, then press “skip deploy stage,” and finish the pipeline creation process. Note that you skipped the “deploy” section because, in this example, you'll perform deployment in the build stage. With a more complex approach, you would probably have separate stages for build, test, approval, deploy, etc. 

Congratulations! You’ve just implemented a basic CI/CD pipeline for your AWS Lambda project. But wait, it looks like it failed its execution:

Figure 3: Build failed

This happened because you didn’t provide the buildspec file in your repository and/or because you didn’t attach proper policies to the CodeBuild service role for your project.

Fix the service role

Since you're going to run the Serverless Framework for Lambda deployment inside CodeBuild, you need some policies attached to its service role. Otherwise, AWS will block your script from completing the deployment.

So, navigate to the “roles” section in the IAM console, find the role of your CodeBuild project, and press the “attach policies” button. Now, find and select the following policies:

  • AWSLambdaFullAccess
  • IAMFullAccess
  • AmazonS3FullAccess
  • AWSCloudFormationFullAccess
  • AmazonAPIGatewayAdministrator

Figure 4: Adding permissions for deployment with serverless framework

It's important to understand that attaching full-access policies is not the best idea. For better security in real-world apps, permissions should be scoped more granularly.
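As a sketch of what more granular permissions might look like, here is a hypothetical inline policy restricted to a single deployment bucket (the bucket name and action list are illustrative only, not a complete set of what a Serverless Framework deploy requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DeploymentBucketOnly",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-sls-deployment-bucket",
        "arn:aws:s3:::my-sls-deployment-bucket/*"
      ]
    }
  ]
}
```

A real policy would need similarly scoped statements for CloudFormation, Lambda, IAM, and API Gateway.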

Configuring a Buildspec File

A buildspec file is a special YAML file that describes the actions to run during the different phases of a build. The configuration is quite flexible and can include many options, but for this example, you only need to define commands for the install, pre_build, and build phases. 

Create the buildspec.yml file inside of the project folder, and fill it with the following code:

version: 0.2

phases:
  install:
    commands:
      - npm install
      - npm install -g serverless
  pre_build:
    commands:
      - sls invoke test
  build:
    commands:
      - sls deploy

According to this list of steps, the script will first install the Serverless Framework and the app's dependencies, then run tests for the Lambda function, and finally deploy it. Commit and push all the changes, then go back to your pipeline's page, and press the “retry” button.

Wait a minute or less, and you'll see that the deployment has completed successfully!

Figure 5: Pipeline executed successfully

Now try to push any changes to the repository, wait for a few seconds, reload the pipeline page, and see that a new build is in progress, meaning that it was triggered automatically! 

You can also navigate to the build-project page and open the logs of every single build by clicking on the run ID in the history section.

Figure 6: Build history

What’s next?

For the next step in learning about CI/CD, you can try to implement reusable build projects, allowing you to perform builds in accordance with environment variables. You can also start implementing pipelines with the approval stage, which is useful for production builds, where deployment can occur only after approval from an authorized person.
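For example, a manual approval stage in a pipeline definition is just another stage whose single action uses the built-in Manual approval provider. A fragment of the pipeline's JSON structure might look like this (the stage and action names here are illustrative):

```json
{
  "name": "Approve",
  "actions": [
    {
      "name": "ManualApproval",
      "actionTypeId": {
        "category": "Approval",
        "owner": "AWS",
        "provider": "Manual",
        "version": "1"
      },
      "runOrder": 1
    }
  ]
}
```

With such a stage in place, the pipeline pauses until someone with the right permissions approves or rejects the run in the console.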

Each improvement you make in your CI/CD processes will save you much time and simplify your life as a developer in the future.
