Even serverless is implemented using containers under the hood; the convenience is that you’re typically not dealing with the servers or the containers yourself. In fact, if you do want to configure things, you may only be able to adjust a handful of settings, such as memory limits or timeouts. If you’re looking for greater control, you’d need to run Kubernetes or Docker yourself, since functions as a service (FaaS) exposes only parts of the serverless stack. When you orchestrate containers yourself, on the other hand, you decide the size of the underlying infrastructure as well as the resource allocation for each container.
This is the basis of much of the serverless vs. containers debate: the tradeoffs that come with greater control. But what if there were a way to work at the container level yet still not have to deal with infrastructure? There just so happens to be one solution that might rule them all: Fargate.
Fargate is AWS’ answer to the need for balance in the containers vs. serverless world. It lets users run containers without worrying about servers and the underlying infrastructure. That was the big selling point of serverless, but only at the function level. Fargate provides the same benefits, in terms of experience and billing, at the container level.
Getting Free of Some Common Limitations
Fargate is much like Kubernetes in that you can set and tune CPU and memory requirements for your containers. It’s also like Lambda in that you then don’t need to worry about the underlying servers that it’s running on. The best of both worlds!
In reality, the actual management of servers depends on how you use Fargate. It has two “launch types” that decide how your code gets hosted:
EC2 launch type: This is where you need to set up and pay for the underlying EC2 instances, regardless of whether or not your containers are running. (If you’ve used ECS before, that’s what this is.)
Fargate launch type: This is where you only need to configure the resource requirements. AWS will then take care of the underlying instances for you on demand. You only pay while your container is running, similar to the AWS Lambda billing model.
Interestingly enough, AWS has also released a managed Kubernetes service (EKS), so the need for ECS may be a bit questionable for some. While ECS is simpler than Kubernetes, the Fargate launch type is simpler still. Fargate is a powerful addition to AWS and helps bridge the divide between serverless and containers.
Overcoming Limitations of AWS Lambda
Lambda took some harsh criticism early on, especially with regard to its execution time limit of five minutes. Since then, AWS has increased this to 15 minutes and added a number of enhancements: VPC support, additional language support, SAM, and more recently Lambda Layers.
A time limit of 15 minutes is a lot better than five, but it still may not be enough for certain tasks, such as ETL. Lambda is also limited to 3GB of memory. Fargate, on the other hand, can be configured with as much as 30GB (10 times Lambda’s capacity) and can leverage more vCPUs. Lastly, Lambda functions have a maximum deployment size of 250MB (including layers), while the maximum container storage for Fargate is 10GB.
AWS Lambda vs. AWS Fargate
How about start times when using VPCs? Yes, another common gripe is that VPCs slow down Lambda start times. This is no different with Fargate, since it also uses the awsvpc networking mode. At least with Lambda, if you don’t need a VPC, you can enjoy a potentially faster start time.
Coming full circle back to containers, the biggest concern of all with Lambda is the underlying container running your functions: you don’t really have full control over it. This can be troublesome when you need a specific Linux distribution, library, or application installed in the runtime environment, and it also raises fears of vendor lock-in. Lambda Layers help with this a bit, since you can convert your Docker images to Lambda Layers. Just be aware that this route greatly increases complexity and erodes some of Lambda’s simplicity. There is also an impact, and added cost, for DevOps, plus you’ll need to learn tools such as img2lambda. It all adds up!
In general, serverless and Lambda still present many challenges in monitoring. AWS added X-Ray and also has CloudWatch, but those capabilities are still somewhat limited. Lambda’s UX leaves a bit to be desired, and setting up alerts to key problems can be challenging.
Faster Time to Production
A key benefit of serverless functions is their simplicity, which eases the burden on DevOps and gets your services into production faster. Compare this against a typical Docker and Kubernetes production setup, and you’ll quickly realize how much more configuration and time goes into it all. Tools such as Helm have been created to help, but there’s still a lot of setup and management work involved.
AWS Fargate Setup
With Fargate, you configure your Tasks and Services through the AWS web console (or SDK), using images held in AWS’ Elastic Container Registry (ECR) or Docker Hub. If you’ve used ECS before, you’re likely already familiar with this. Any new image you push to ECR or Docker Hub can then be used by updating and running the Task (or Service). If your Task definition references the “latest” tag of an image, each new run will pull the most recent image, leaving you with even less to do.
Defining a task
Not a fan of AWS’ web console? Not a problem! The AWS CLI has an ECS command set to help, although it’s somewhat verbose, and there are a few tools out there to simplify it. There are also options such as Terraform and Pulumi, which are useful if you’re working Fargate into a larger setup where you wish to standardize how you manage infrastructure as code.
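For reference, a task definition is just a small JSON document. Here’s a minimal sketch of one for the Fargate launch type, which you could register via the CLI’s register-task-definition command; the account ID, family name, role, and image URL below are placeholders, not real values:

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

Note that Fargate only accepts certain CPU/memory pairings; 0.25 vCPU (“256”) with 512MB is one of the valid combinations.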
No matter which tool you use, you won’t need to configure EC2 instances. This can save a lot of time.
With something like Kubernetes, you’re still defining services, containers, pods, and so on. There’s always some level of service definition and orchestration, and you also need to consider the cluster, as with ECS. Only with the Fargate launch type do you not have to worry about the cluster.
The cost of serverless versus containers is greatly debated. People have run the numbers for AWS Lambda running 24/7 with certain RAM allocations, and some have found that past a certain point, Lambda costs more than an EC2 instance. In such a case, a container running on an instance would be cheaper. Under other circumstances, maybe not. Even if you’re running something 24/7, there are a variety of other factors at play that make it a little less clear.
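To see how people run these numbers, here’s a rough back-of-envelope sketch in Python. The per-unit prices are illustrative (roughly us-east-1 list prices at one point in time) and change regularly, so treat this as an example of the method rather than a pricing reference:

```python
# Back-of-envelope cost comparison for a workload running 24/7 for a month.
# All prices are illustrative placeholders -- check the current AWS pricing
# pages for your region before drawing any real conclusions.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 seconds

def lambda_monthly_cost(memory_gb, price_per_gb_second=0.0000166667):
    """Cost of one Lambda function executing continuously all month
    (ignores the per-request charge and the free tier)."""
    return memory_gb * SECONDS_PER_MONTH * price_per_gb_second

def fargate_monthly_cost(vcpu, memory_gb,
                         vcpu_hourly=0.04048, gb_hourly=0.004445):
    """Cost of one Fargate task running continuously all month."""
    hours = SECONDS_PER_MONTH / 3600
    return (vcpu * vcpu_hourly + memory_gb * gb_hourly) * hours

if __name__ == "__main__":
    lam = lambda_monthly_cost(memory_gb=1)
    far = fargate_monthly_cost(vcpu=0.25, memory_gb=1)
    print(f"Lambda, 1GB, 24/7:            ${lam:.2f}/month")
    print(f"Fargate, 0.25 vCPU/1GB, 24/7: ${far:.2f}/month")
```

With these example prices, an always-on 1GB workload comes out meaningfully cheaper on Fargate than on Lambda, which is the usual conclusion for 24/7 loads; for bursty, short-lived work, per-invocation Lambda billing tends to win instead.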
In a real production scenario, it’s not hard to end up with a large bill. Let’s face it, running Kubernetes is expensive. It also gets complicated quickly. Even if you are leveraging auto-scaling underneath for the clusters, it’s still not perfect.
People also often forget to factor in the cost of configuring and working with Kubernetes or Docker Compose. There is value in convenience, and we do know that getting into production is faster with serverless and Fargate. If time to market is of the essence, Kubernetes can mean more admin work and costly operations: managing numerous YAML configurations, dealing with RBAC, auto-scaling infrastructure, capacity planning, and more. Time is money, and if engineers are “fighting” with Kubernetes, it can get costly.
In this article, we discussed the differences between services like Fargate and Lambda and compared Fargate to other solutions such as Kubernetes. We also highlighted why Fargate may be the lesser-known solution you’ve been waiting for.
AWS is actively adding new features to all of its services. Lambda’s early limitations have been resolved over the years, and we should expect Fargate to be no different. New features such as PrivateLink and Secrets Manager support have been added to Fargate, so it’s only getting better. Plus, Fargate recently got a price cut.
If AWS Fargate isn’t on your radar yet, it should be.