Serverless CI/CD: Best Practices

CI/CD stands for Continuous Integration, Continuous Delivery, and Continuous Deployment, terms you’ll hear often in modern software development.

Continuous Integration is the practice of merging all developers’ working copies into a shared repository multiple times a day. The main idea behind this is to prevent “integration hell”. In general, CI is combined with automated tests that run before a developer’s work is merged into the main shared repository, ensuring that new work will not break anything in the current system.

Continuous Delivery is about producing software in short cycles: building, testing, and releasing reliable software continuously. This reduces the cost, time, and risk of delivering changes, since software in production typically needs many small, incremental updates.

Lastly, Continuous Deployment has to do with deploying the application automatically whenever the code changes, which can happen many times a day.

Traditionally, companies released software on a set schedule: every month on a particular date, or maybe every Tuesday. By applying CI/CD practices, it is now possible to release updates to an application several times a day, which has become the norm for many companies.

Each of these practices is part of a delivery pipeline that can be automated using software solutions already available on the market. Below, we’ll discuss CI/CD further and review some best practices to keep in mind.

Why Is CI/CD So Important in Serverless Applications?

DevOps engineers already love serverless offerings. An automated delivery pipeline is crucial for serverless applications because they tend to be highly distributed systems in which each component can change independently of the others. Without a well-defined automated pipeline, deploying serverless applications can become very difficult.

In a traditional application, where the whole application is a single service deployed on a server, integration problems are less common, as most operations happen inside the application itself. This kind of application is therefore easy to unit test properly.

In a serverless application, however, there are many components with different life cycles. Therefore, integration problems between the different components are very common. In this case, having an automated delivery pipeline with a proper set of automated integration tests is critical for any serverless project.

To be able to deploy your serverless applications multiple times a day and feel secure about it, you need to have an automated delivery pipeline that you can trust. With that in mind, here are some best practices to follow when designing your pipeline.

Use Infrastructure as Code

Infrastructure as code entails managing infrastructure through source code instead of a manual process. For a serverless application, this means creating your whole application through scripts and configuration files instead of going to the console and creating those same resources by hand.

This practice is extremely important in modern applications because it lets you treat your infrastructure as code: putting it through the same reviews and tests as your application code, and knowing why something in your infrastructure has changed. In addition, you can replicate the same infrastructure to create separate environments for testing and production, so you can see how your code will run in a production-like environment before it actually reaches production.

Also, by using infrastructure as code, you can deploy your infrastructure through an automated delivery pipeline. For example, if you add a new database or storage bucket to your infrastructure definition, pushing that change to your version control tool will trigger an automatic deployment, and the new resources will be created in your infrastructure without any manual steps.
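To make this concrete, here is a minimal sketch using the AWS CDK in TypeScript, just one of several IaC tools (CloudFormation, Terraform, and the Serverless Framework are common alternatives). The stack, bucket, and function names are illustrative, not from any real project:

```typescript
// Minimal AWS CDK sketch: the whole stack is defined in code,
// so it can be reviewed, versioned, and deployed by a pipeline.
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as s3 from 'aws-cdk-lib/aws-s3';

class OrdersStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // A storage bucket, created from code instead of the console.
    const uploads = new s3.Bucket(this, 'UploadsBucket');

    // A serverless function whose code lives next to the infra definition.
    const handler = new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // hypothetical local folder
    });

    // Permissions are code too, reviewed like everything else.
    uploads.grantReadWrite(handler);
  }
}

const app = new cdk.App();
new OrdersStack(app, 'OrdersStack');
```

Because the bucket, the function, and even the permissions live in version control, adding a new resource is just another commit flowing through the same pipeline as your application code.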

Have an Automatic Deployment Pipeline

This is a must! Because the components of a serverless application are so small, developers will sometimes finish a change and immediately deploy it from their own computer to the cloud. This should be forbidden in any project. Why? Because when a developer deploys from their computer, even to a test environment, they break the traceability of the code.

Code should always be deployed from one centralized place. When ready, the developer should merge the code into the shared repository, which starts an automated deployment pipeline. This pipeline can be as automated as you want: sometimes developers want to trigger the deployment manually by pressing a button, and sometimes the deployment starts as soon as a new commit is pushed to a specific branch. The more automated the pipeline, the better.

This deployment pipeline should also include automated unit and integration tests as well as the deployment of the code to the cloud. That way, if you find a bug in the system, you will know exactly which version of the code contains it; you can then fix the bug and trace the fix all the way to production. When developers simply deploy code from their computer, they sometimes haven’t run all the required tests, which lets broken code slip into production.
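As one hedged illustration, here is what such a pipeline might look like with CDK Pipelines, assuming a hypothetical GitHub repository my-org/my-serverless-app; GitHub Actions, GitLab CI, and similar tools can express the same idea:

```typescript
// Sketch of a deployment pipeline defined in code with CDK Pipelines.
// It triggers on pushes to 'main', runs the tests, and only then deploys.
import * as cdk from 'aws-cdk-lib';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

class PipelineStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        // 'my-org/my-serverless-app' is a placeholder repository name.
        input: CodePipelineSource.gitHub('my-org/my-serverless-app', 'main'),
        commands: [
          'npm ci',
          'npm test',      // unit and integration tests gate the deploy
          'npx cdk synth', // produce the deployable cloud assembly
        ],
      }),
      // Application stages would be added here with pipeline.addStage(...),
      // e.g. a test environment first, then production.
    });
  }
}

const app = new cdk.App();
new PipelineStack(app, 'PipelineStack');
```

Note how the tests sit in front of the deployment: if npm test fails, nothing reaches the cloud, and every deployment is traceable to a specific commit on the main branch.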

Implement Automatic Testing

As already mentioned, automated testing is an important part of your deployment pipeline. Manually testing serverless applications is a challenge, as there are a lot of moving pieces. The best approach is to write good unit tests for each function’s code and then write integration tests against the different services.

For example, suppose your function is triggered by an HTTP POST event, processes information in the body of the request, and saves a new record to the database. Your unit tests should verify that the transformation applied to the request body is correct and that the object you are about to save in the database looks as expected. Your integration tests should then verify that the trigger works, that is, that the HTTP request correctly reaches your function and that the call from your function to the database service succeeds. You might also write integration tests against other functions you’ve created that this function uses. This is how you can make sure your application works every time you deploy.
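Here is a rough sketch of both kinds of tests in TypeScript with Jest. The toOrderRecord function, the request shape, and the TEST_API_URL environment variable are all hypothetical; the point is that extracting the transformation into a pure function makes the unit test trivial, while the integration test runs against a deployed test environment:

```typescript
// Hypothetical transformation extracted from the handler so it can be unit tested.
interface OrderRecord { id: string; total: number; createdAt: string; }

export function toOrderRecord(body: { orderId: string; amount: number }): OrderRecord {
  return { id: body.orderId, total: body.amount, createdAt: new Date().toISOString() };
}

// Unit test: verifies the transformation in isolation, no cloud services involved.
test('builds the record we intend to save', () => {
  const record = toOrderRecord({ orderId: 'o-42', amount: 19.99 });
  expect(record.id).toBe('o-42');
  expect(record.total).toBeCloseTo(19.99);
});

// Integration test: verifies the HTTP trigger against a deployed test environment.
// TEST_API_URL is assumed to point at that environment; the expected status
// code depends on how your API responds.
test('POST /orders reaches the function and persists the record', async () => {
  const res = await fetch(`${process.env.TEST_API_URL}/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ orderId: 'o-42', amount: 19.99 }),
  });
  expect(res.status).toBe(201);
});
```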

Writing automated tests may sound difficult, but once you get used to it, it’s pretty straightforward, and you start seeing the benefits right away.

Independently Deploy Your Services

One important consideration when designing serverless applications is that your internal services should be deployable independently. Say your application has three services: product, order processing, and customer processing. These services call each other, but you should be able to deploy each of them on its own. You don’t want to be forced into a set order, where you always deploy customer processing first, then order processing, and finally product. Depending on a fixed deployment order is a dangerous practice, especially as your application grows, and it slows down your deployments.

It is also very important that these services don’t share any private data and that all communication between them happens through clearly defined APIs or queues. Only then will the services be truly independent rather than tightly coupled.
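As a small sketch of what communicating through a clearly defined interface can look like, here the order service publishes an event to a queue instead of reaching into another service’s data. The event shape and the ORDER_EVENTS_QUEUE_URL variable are hypothetical; the message format acts as the contract between the services:

```typescript
// Sketch: the order service publishes an event to a queue instead of
// writing to the customer service's database. Either service can be
// redeployed on its own as long as the message shape stays compatible.
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

// Hypothetical event contract shared between order and customer processing.
interface OrderPlacedEvent { orderId: string; customerId: string; total: number; }

export async function publishOrderPlaced(event: OrderPlacedEvent): Promise<void> {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.ORDER_EVENTS_QUEUE_URL, // injected configuration, not hardcoded
    MessageBody: JSON.stringify(event),
  }));
}
```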

Separate Code from Configuration

Another very important practice for achieving an automated delivery pipeline is to separate your code from your configuration. Configuration, such as the name of a database or the URL of a queue, is what is most likely to change between deployments to different environments.

Since your infrastructure is also code, you want both your infrastructure definitions and your application code to be free of configuration data. Why? So that you can use the same code anywhere. If code and configuration are separated, you can deploy exactly the same code to development, testing, and production environments and only change the configuration. Doing this will prevent a lot of bugs.

Additionally, if your configuration is stored somewhere else, you can change how your application behaves without redeploying the code. For example, say your application needs an API key to connect to some service. If you store the API key in your code, you will need to redeploy whenever that key changes. Instead, store the API key in a configuration management service from your cloud provider and reference it from your code. Then, whenever you change the API key, the application picks up the new value automatically.
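For example, here is a minimal sketch that reads the key from AWS Systems Manager Parameter Store at runtime (Secrets Manager or plain environment variables work similarly); the parameter name is a placeholder:

```typescript
// Sketch: read the API key from a configuration service at runtime
// instead of baking it into the code. Rotating the key then requires
// no redeployment.
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

const ssm = new SSMClient({});

export async function getApiKey(): Promise<string> {
  const result = await ssm.send(new GetParameterCommand({
    Name: '/my-app/prod/third-party-api-key', // placeholder parameter name
    WithDecryption: true, // the value is stored encrypted
  }));
  if (!result.Parameter?.Value) {
    throw new Error('API key not found in Parameter Store');
  }
  return result.Parameter.Value;
}
```

In a real application you would likely cache the value between invocations rather than fetch it on every call, but the principle is the same: the key lives outside the code.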

Another reason not to store configuration in code is security. Code goes into a code repository, and configuration generally contains sensitive information about your systems that should not be available to everybody.

Conclusion

Deploying serverless applications is one of the biggest challenges of running serverless systems in production. The more complex a serverless application becomes, the more complex its deployment can become as well. It is important to keep things simple, no matter how big your application is: keep your services independent and loosely coupled, and keep your code and configuration separate.

Having clear boundaries between all the services that form your application will make your serverless application easier to maintain in the long run. And automating as much as you can will help you cope with the constant changes that modern applications face.

There are a number of tools that can help you implement these best practices. You can check out Stackery, Pulumi, or CloudBees CodeShip to help you build your automatic deployment pipeline. And if you want to find more interesting tools for deploying code to production, this link should be your next read.