Configuring an AWS Lambda function involves many parameters, and one of the most confusing is memory size. Developers rarely test their code’s memory consumption, and certainly not for every use case, so the setting often ends up being a guess. What is less widely known is that the memory setting also determines, proportionally, how much CPU is allocated to the function. Currently, AWS Lambda supports memory sizes from 128MB up to 3008MB.
More allocated CPU essentially means:
- Faster function duration — In some cases it means less latency for your customers!
- Higher costs — Pricing increases proportionally.
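Since Lambda bills by GB-seconds, the cost side of this tradeoff is easy to estimate directly. Here is a minimal sketch; the per-GB-second rate and the 100ms billing granularity reflect Lambda's on-demand pricing model at the time of writing and may change:

```python
# Rough per-invocation Lambda compute cost.
# Assumed rate: $0.00001667 per GB-second (on-demand, at time of writing).
GB_SECOND_RATE = 0.00001667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Return the approximate compute cost (USD) of one invocation."""
    gb = memory_mb / 1024
    # Billed duration is rounded up to the nearest 100ms (historical billing model).
    billed_seconds = -(-duration_ms // 100) * 100 / 1000
    return gb * billed_seconds * GB_SECOND_RATE

# If doubling the memory halves the duration, the cost stays roughly the same:
slow = invocation_cost(1024, 800)  # 1GB running for 800ms
fast = invocation_cost(2048, 400)  # 2GB running for 400ms
print(f"{slow:.8f} USD vs {fast:.8f} USD")
```

This is why, as long as duration scales down with added CPU, paying for more memory is not necessarily more expensive per invocation.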
But then a question arises: how much faster does a function actually get with more memory, and is it worth the extra cost?
Benchmarking: Fibonacci recursion in Python
With an automated benchmark script, we can easily test every available memory size. The script pre-warms the function, so the measurements are free of cold-start delays.
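The benchmarked workload is a naive recursive Fibonacci, which is CPU-bound and therefore a good probe for the memory-to-CPU scaling. A minimal sketch of such a handler (the handler name and the `n` key in the event payload are assumptions here, not necessarily what the repository uses):

```python
def fib(n: int) -> int:
    """Naive recursive Fibonacci -- deliberately CPU-heavy."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def handler(event, context):
    # 'n' is read from the invocation payload; around 30 keeps
    # the runtime in the seconds range on lower memory settings.
    n = event.get("n", 30)
    return {"n": n, "result": fib(n)}
```

Because the work is pure computation with no I/O waits, duration here should track the allocated CPU almost directly.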
Without any further ado, let’s explore the chart:
We can definitely observe that (memory) size matters! More memory dramatically reduces duration.
As expected, shorter durations come at higher prices across the different memory sizes. However, beyond a certain point (2048MB => 3008MB) performance no longer improves at the expected rate, while the price keeps climbing.
AWS Lambda Memory Performance Conclusion
So picking the right amount of memory for our functions is clearly an important task. The tradeoff is potentially higher costs vs. shorter durations, which means lower latency when the function faces customers and users. Our recommendation is to run a few manual tests per function to get a sense of its duration, and then decide based on the results.
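Before touching Lambda at all, timing the workload locally already gives a rough feel for its duration. A hedged sketch of such a manual test, using only the standard library (the `fib` workload stands in for your own function's hot path):

```python
import time

def fib(n: int) -> int:
    """Sample CPU-bound workload to time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def time_ms(func, *args, repeats: int = 3) -> float:
    """Return the best-of-N wall-clock duration in milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        best = min(best, time.perf_counter() - start)
    return best * 1000

print(f"fib(25) took ~{time_ms(fib, 25):.1f} ms locally")
```

Local timings will not match Lambda's exactly, since the allocated CPU differs per memory size, but they are a cheap way to spot whether a function is CPU-bound enough for the memory setting to matter.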
You can find the Fibonacci function with the benchmark script at Epsagon’s open-source repository: lambda-memory-performance-benchmark. Feel free to contribute!
UPDATE: you can now use the benchmark tool to test your own functions! Looking forward to seeing your results.