I’m incredibly excited to announce that Epsagon has officially launched its observability and cost monitoring platform for serverless applications today. It is available on the official website and includes a free tier.
A little more than a year ago, my co-founder Ran Ribenzaft (CTO) and I founded Epsagon. We identified serverless as an emerging technology that’s quickly going to take over the cloud applications domain. It just made sense – don’t manage infrastructure, focus on the business logic, iterate faster, win. To us, as engineers, R&D managers, and cloud users, it sounded right.
Our early customer conversations showed that troubleshooting and monitoring are the biggest problems serverless users experienced, and the recent serverless community survey proves that this is still the case.
So what is Epsagon? What are we actually launching today? I’ll try to describe it briefly.
Serverless is more than functions
The first thing that we realized is that people define “serverless” in different ways – “It’s not really a word!”, “There are servers!”, “By serverless, you mean AWS Lambda, right?”.
To me, serverless means one thing – focus on your business logic and don’t think about infrastructure. FaaS services enable that, but they are not enough. Applications need databases, storage, communication, and external APIs. In other words: serverless applications are distributed applications that contain managed services. Once we realized that, it was obvious that an observability platform for serverless has to contain, at its core, a distributed tracing technology.
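The core idea behind distributed tracing is propagating a shared trace ID through every hop of a request. Here is a minimal sketch of that principle in Python – the `trace_id` field name and the handler are hypothetical illustrations, not Epsagon’s actual agent or any AWS API:

```python
import json
import uuid

def get_trace_id(event):
    """Reuse the caller's trace ID if present, otherwise start a new trace.

    The "trace_id" field is a hypothetical convention for this sketch,
    not a real Epsagon or AWS field name.
    """
    return event.get("trace_id") or str(uuid.uuid4())

def checkout_handler(event, context=None):
    """A function that passes its trace ID along to downstream services."""
    trace_id = get_trace_id(event)
    # Any message sent onward to a queue, database, or another function
    # carries the same trace ID, so the individual spans can later be
    # stitched together into one end-to-end trace of the business flow.
    downstream_message = json.dumps(
        {"trace_id": trace_id, "order": event.get("order")}
    )
    return {"trace_id": trace_id, "payload": downstream_message}
```

Because the ID survives across managed services and function boundaries, a tracing backend can reconstruct the whole transaction even though no single service sees more than its own step.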
Automation is critical
Why do organizations and developers want to go serverless? It’s for a simple reason: efficiency. Cost efficiency, developer efficiency, go-to-market efficiency. It enables an organization to cut its cloud costs, lets DevOps spend much less time on infrastructure management and scaling, and lets developers ship directly to production multiple times a day. When it comes to performance monitoring and troubleshooting, these are not optional – they are a must in every organization, and the bigger the organization, the bigger the risk of going without them.
If the whole point of serverless is iterating fast, and developers are deploying hundreds of functions to production every day, does it make sense at all to implement a manual instrumentation, logging, or tracing framework? In my opinion, it contradicts the whole purpose of going serverless. Based on this belief, we decided from day one that our product has to be automatic, with an extremely easy onboarding process, and use algorithms and AI to predict issues, instead of letting the user guess what needs to be monitored.
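To make the contrast with manual instrumentation concrete, here is a toy sketch of automatic instrumentation: a decorator that times and records every invocation without the business logic changing at all. This is an illustration of the principle, not Epsagon’s actual agent:

```python
import functools
import time

def auto_instrument(handler):
    """Wrap a handler so every invocation is timed and recorded without
    touching the business logic -- a toy stand-in for what an automatic
    monitoring agent attaches at deploy time."""
    @functools.wraps(handler)
    def wrapper(event, context=None):
        start = time.time()
        status = "error"
        try:
            result = handler(event, context)
            status = "ok"
            return result
        finally:
            # Runs whether the handler returned or raised.
            wrapper.invocations.append(
                {"duration_ms": (time.time() - start) * 1000, "status": status}
            )
    wrapper.invocations = []
    return wrapper

@auto_instrument
def handler(event, context=None):
    # The business logic stays exactly as the developer wrote it.
    return {"greeting": f"hello {event['name']}"}
```

A real agent hooks in at the runtime or deployment layer rather than via a decorator in the source, but the effect is the same: the developer writes zero monitoring code.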
Cost efficiency goes both ways
One of the main selling points of serverless is the cost savings. If the utilization of the servers in organizations is less than 20 percent, it makes so much sense! Pay-per-use, don’t pay for idle, save a lot of money. And it does make sense – you should pay for what you’re using – but how much are you really using?
People have no idea how much they are using, not to mention how much they expect to use by the end of the month. And what does “using” even mean? The fact is that even if you think you’re not doing anything, your code can run, wait, or hit a timeout – and you’ll be the one paying at the end of the month.
Therefore, to help users enjoy the cost benefits of serverless, we developed cost monitoring, prediction, and breakdown technology. This way, users know exactly how much they are paying for their cloud compute, how that cost is split among the different services and business flows, and whether the expected cost for the month is within budget.
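The underlying arithmetic of pay-per-use compute is simple, which is exactly why a breakdown per service or business flow is so useful. Here is a minimal estimate for a Lambda-style function; the default rates reflect AWS Lambda’s published pay-per-use pricing at the time of writing (roughly $0.0000166667 per GB-second plus $0.20 per million requests) – check current pricing before relying on the numbers:

```python
def lambda_cost(invocations, avg_duration_ms, memory_mb,
                gb_second_price=0.0000166667, request_price=0.0000002):
    """Estimate monthly cost for one pay-per-use function.

    Compute is billed in GB-seconds: duration (seconds) times allocated
    memory (GB), summed over all invocations, plus a flat per-request fee.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * gb_second_price + invocations * request_price

# Hypothetical workload: 10M invocations a month, 200 ms average, 512 MB.
monthly = lambda_cost(10_000_000, 200, 512)  # roughly $18.7
```

Summing this per function and grouping by the business flow each function belongs to is what turns a raw bill into a cost breakdown you can act on.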
Business transactions, not function invocations
When thinking about serverless as “a bunch of functions”, it’s easy to take the straightforward route of “functions monitoring”. Make sure that your functions are OK, and your application will be OK – right?
Not always. If your application is rather simple, then yes – monitoring just your functions may be enough. But for distributed applications, monitoring individual functions, or reading their logs, is just not the right tool. Distributed systems require a distributed tracing solution, and the easiest place to see this is troubleshooting – it’s very easy to get lost in a distributed application when trying to figure out what happened using the logs of individual services.
The future is self-serve
Soon after we started developing our product, we noticed that many of our users are actually developers, not traditional operations or DevOps experts. When we thought about it, it actually made sense – developers are deploying directly to the cloud, so there is no one “in the middle”. And developers like to try products: good documentation, easy onboarding, and you’re good to go. That’s why we decided to go the self-serve route as quickly as possible, and this is at the core of what we’re launching today.
What’s next for Epsagon?
Today is a big milestone for Epsagon, but we’re just getting started. We’ll keep developing our automatic distributed tracing technology, which is unique. We will announce support for more programming languages, frameworks, and cloud providers. Serverless is just beginning – I truly believe that it is the future of cloud applications, and I couldn’t be happier to be on this journey with such an amazing team!