This article will discuss several open-source serverless frameworks and will go deep into OpenFaaS and Knative to present their architecture, main components, and basic installation steps. If you are interested in this topic and plan to develop serverless applications using open-source platforms, this article will give you a better understanding of these solutions.

Over the past few years, serverless architectures have been rapidly gaining in popularity. The main advantage of this technology is the ability to create and run applications without the need for infrastructure management. In other words, when using a serverless architecture, developers no longer need to allocate resources, scale and maintain servers to run applications, or manage databases and storage systems. Their sole responsibility is to write high-quality code.

There have been many open-source projects for building serverless frameworks (Apache OpenWhisk, IronFunctions, Fn from Oracle, OpenFaaS, Kubeless, Knative, Project Riff, etc.). Moreover, because open-source platforms provide access to IT innovations, many developers are interested in open-source solutions.

OpenWhisk, Firecracker, Oracle Fn, Kubeless & Fission

Before delving into OpenFaaS and Knative, let’s briefly describe a few of the other platforms.

Apache OpenWhisk is an open cloud platform for serverless computing that uses cloud computing resources as services. Compared to other open-source projects (Fission, Kubeless, IronFunctions), Apache OpenWhisk is characterized by a large codebase, high-quality features, and a large number of contributors. However, the platform’s heavyweight tooling (CouchDB, Kafka, Nginx, Redis, and Zookeeper) causes difficulties for developers. In addition, this platform is imperfect in terms of security.

Firecracker is a virtualization technology introduced by Amazon. It provides virtual machines with minimal overhead and allows for the creation and management of isolated environments and services. Firecracker offers lightweight virtual machines called microVMs, which use hardware virtualization for full isolation while providing the performance and flexibility of conventional containers. One inconvenience for developers is that the technology is written entirely in Rust. Firecracker also uses a stripped-down software environment with a minimal set of components: to save memory, reduce startup time, and increase security, it boots a modified Linux kernel from which everything superfluous has been removed, with reduced functionality and device support. The project was developed at Amazon Web Services to improve the performance and efficiency of the AWS Lambda and AWS Fargate platforms.

Oracle Fn is an open-source serverless platform that provides an additional level of abstraction for cloud systems to allow for Functions as a Service (FaaS). As in other open platforms, in Oracle Fn the developer implements the logic at the level of individual functions. Unlike existing commercial FaaS platforms, such as Amazon AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions, Oracle’s solution is positioned as having no vendor lock-in. The user can choose any cloud solution provider to launch the Fn infrastructure, combine different cloud systems, or run the platform on their own equipment.

Kubeless is a framework that supports the deployment of serverless functions in your cluster and lets you execute your Python, Node.js, or Ruby code in response to both HTTP requests and event triggers. Kubeless is built on core Kubernetes functionality such as Deployments, Services, ConfigMaps, and so on. This keeps the Kubeless codebase small and also means that developers do not have to reimplement large portions of scheduling logic that already exists inside Kubernetes itself.

Fission is an open-source platform that provides a serverless architecture over Kubernetes. One of the advantages of Fission is that it takes care of most of the tasks of automatically scaling resources in Kubernetes, freeing you from manual resource management. The second advantage of Fission is that you are not tied to one provider and can move freely from one to another, provided that they support Kubernetes clusters (and any other specific requirements that your application may have).

Main Benefits of Using OpenFaaS and Knative

OpenFaaS and Knative are publicly available and free open-source environments for creating and hosting serverless functions. These platforms allow you to:

    • Reduce idle resources.
    • Quickly process data.
    • Interconnect with other services.
    • Balance load with intensive processing of a large number of requests.

However, despite the advantages of both platforms and serverless computing in general, developers must assess the application’s logic before starting an implementation. This means that you must first break the logic down into separate tasks, and only then can you write any code.

For clarity, let’s consider each of these open-source serverless solutions separately.

How to Build and Deploy Serverless Functions With OpenFaaS

The main goal of OpenFaaS is to make it simple to build and run serverless functions in Docker containers, allowing you to run complex and flexible infrastructures.

OpenFaaS Design & Architecture

OpenFaaS is a Cloud Native serverless framework and can therefore be deployed on many different cloud platforms as well as on bare-metal servers. There are installation options for both Kubernetes and Docker Swarm, and the installation process is automated, deploying all the components necessary to run on the chosen platform. The framework includes an API Gateway, an asynchronous queue-worker backed by NATS for async task processing, and monitoring with Prometheus. The Watchdog lives inside each function’s container, and because Docker is not the only runtime available in Kubernetes, other container runtimes can be used as well.

OpenFaaS Layer Overview

API Gateway

The OpenFaaS Gateway provides a route to the services deployed with OpenFaaS. A consumer calls the Gateway and their request is then routed through to the correct function. Metrics are automatically collected in Prometheus and these metrics can be used to auto-scale your functions to deal with changes in load.

Function Watchdog

The OpenFaaS Watchdog is used to provide a consistent interface between a developer’s code and the platform. Almost any code can be converted to an OpenFaaS function. If your use case doesn’t fit one of the supported language templates then you can create your own OpenFaaS template using the watchdog to relay the HTTP requests to your code inside the container.
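As a rough illustration of the classic watchdog model, each invocation forks a process, pipes the HTTP request body to that process’s stdin, and returns its stdout as the HTTP response. A handler can therefore be as small as a shell script (this is a hypothetical sketch, not an official OpenFaaS template):

```shell
#!/bin/sh
# A classic-watchdog-style handler: the watchdog pipes the HTTP request
# body into the handler process's stdin, and whatever the process writes
# to stdout becomes the HTTP response body.
handle() {
  input=$(cat)                   # read the whole request body from stdin
  printf 'Hello, %s!' "$input"   # write the response to stdout
}

# Simulate one invocation locally by piping in a request body:
printf 'World' | handle          # prints: Hello, World!
```

In a custom template, pointing the watchdog’s `fprocess` at such a script would make it run once per request; the newer of-watchdog can instead keep one long-lived HTTP process warm between requests.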


Prometheus

This component lets you view the dynamics of metric changes at any time, compare them with others, transform them, and view them in text form or as a graph, all without leaving the main page of the web interface. Prometheus stores the collected metrics in RAM and saves them to disk once a given size limit is reached or after a certain period of time.

Docker Swarm and Kubernetes

Docker Swarm and Kubernetes are the orchestration engines. Components such as the API Gateway, the Function Watchdog, and an instance of Prometheus run on top of these orchestrators. Kubernetes is recommended for production deployments, while Docker Swarm is better suited to local function development.

Moreover, all developed functions, microservices, and products are packaged as Docker containers, and Docker serves as the main platform on which OpenFaaS developers and sysadmins develop, deploy, and run serverless applications.

The Main Points for Installation of OpenFaaS on Docker

The OpenFaaS API Gateway relies on the built-in functions provided by the selected Docker orchestrator. To do this, the API Gateway connects to the appropriate plugin for the selected orchestrator, records various function metrics in Prometheus, and scales functions based on alerts received from Prometheus through AlertManager.

You can install OpenFaaS to any Kubernetes cluster, whether using a local environment, your own datacenter, or a managed cloud service such as AWS EKS.

For running locally, the maintainers recommend using the KinD (Kubernetes in Docker) or k3d (k3s in Docker) project. Other options like Minikube and microk8s are also available.

Install either of these local tools from its project website.

Once installed, you will be able to run “kubectl”. Run “kubectl get nodes” to make sure that you see at least one Kubernetes node appearing and that your configuration is set correctly.

Now download the arkade utility. arkade is a Kubernetes app installer for developers which can be used to install OpenFaaS along with a number of other applications like the Kubernetes dashboard.

$ curl -sL | sudo sh

Now install OpenFaaS using its app:

$ arkade install openfaas

You’ll see that arkade downloads helm 3, fetches the openfaas chart and then installs it to the cluster, all with a single command.

At the end of the installation, arkade will give you info on how to find your password and how to deploy a sample function.

Info for app: openfaas
# Get the faas-cli
curl -SLsf | sudo sh
# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Type in the commands given to you, starting with the CLI installation, then the port-forwarding command to bring the OpenFaaS gateway securely to your local computer on port 8080, and finally the login command. You can get this screen back at any time with “arkade info openfaas”.

Different Programming Languages With OpenFaaS

To create and deploy a function with OpenFaaS using templates in the CLI, you can write a handler in almost any programming language. For example:

  • Create a new function:
$ faas-cli new --lang <<language>> <<function name>>
  • Generate a stack file and folder:
$ git clone \
 cd faas \
 git checkout 0.6.5
  • Build the function:
$ faas-cli build -f <<stack file>>
  • Deploy the function:
$ faas-cli deploy -f <<stack file>>
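The stack file referenced in these commands is a YAML manifest describing the functions to build and deploy. A minimal sketch might look like the following (the function name, handler folder, and image name here are hypothetical):

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080   # the port-forwarded gateway from earlier
functions:
  fib:                             # hypothetical function name
    lang: python3                  # language template used by faas-cli
    handler: ./fib                 # folder containing the handler code
    image: docker-user/fib:latest  # hypothetical registry image to push/pull
```

Running `faas-cli build -f` and `faas-cli deploy -f` against such a file builds the container image from the template and handler, then registers the function with the gateway.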

Testing the Function From OpenFaaS UI

You can quickly test the function in several ways from the OpenFaaS user interface, as shown below: 

  • Go to OpenFaaS UI:

  • Use curl: 
$ curl -d "10" http://localhost:8080/function/fib
  • Use the UI

At first glance, things look relatively easy to use. However, since OpenFaaS makes use of Kubernetes, you may also need to learn how to manage a cluster. Managed Kubernetes services help to make this task more approachable.

There is an entire community of OpenFaaS developers on GitHub where you can find useful information as well.

Pros and Cons of OpenFaaS

OpenFaaS simplifies the building of the system. Fixing errors becomes easier, and adding new functionality to the system is much faster than in the case of a monolithic application. In other words, OpenFaaS allows you to run code in any programming language anytime and anywhere.

Since OpenFaaS has been built by a community of independent developers, it has also gained popularity and significant traction. The homepage shows dozens of production user companies such as BT, Vision Banco and RateHub. To date, OpenFaaS has the most GitHub stars of all the open source serverless projects on the CNCF Landscape.

OpenFaaS uses container images for functions, so you should bear in mind that:

  • Each function replica runs within a container and is built into a Docker image.
  • There is an option to avoid cold starts by keeping a minimum level of scale, such as 20/100 or 1/5 (minimum/maximum replicas).
  • Scaling to zero is optional, but if used in production, you can expect just under a two-second cold start for the first invocation.
  • The queue-worker enables asynchronous invocation, so if you do scale to zero, you can decouple any cold start from the user.
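In a stack.yml manifest, these scale bounds are set with labels on the function. A sketch, using a hypothetical function name and image (enabling scale-to-zero may also require additional components such as the faas-idler, depending on your installation):

```yaml
functions:
  fib:                                  # hypothetical function name
    lang: python3
    handler: ./fib
    image: docker-user/fib:latest       # hypothetical registry image
    labels:
      com.openfaas.scale.min: "1"       # keep at least one warm replica
      com.openfaas.scale.max: "5"       # upper bound under heavy load
      com.openfaas.scale.zero: "false"  # opt out of scaling to zero
```

Setting the minimum above zero trades some idle resource usage for the guarantee that the first invocation never hits a cold start.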

Deploying and Running Functions With Knative

Knative allows you to develop and deploy container-based serverless applications that you can easily port between cloud providers. Knative is an open-source platform that is just starting to gain popularity but is of great interest to developers today.

Architecture and Components of Knative

The Knative architecture consists of the Building, Eventing, and Serving components.


The Building component of Knative is responsible for building container images from source code inside the cluster. This component works on the basis of existing Kubernetes primitives and also extends them.


The Eventing component of Knative is responsible for universal subscription, delivery, and event management as well as the creation of communication between loosely coupled architecture components. In addition, this component allows you to scale the load on the server.
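As a sketch of this subscription model, a Trigger resource tells Eventing which events from a Broker should be delivered to which subscriber. The names and event type below are hypothetical:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: greeter-trigger             # hypothetical trigger name
spec:
  broker: default                   # the Broker to subscribe to
  filter:
    attributes:
      type: dev.example.greeting    # only deliver events of this CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: greeter                 # hypothetical Knative Service receiving the events
```

Producers stay decoupled from consumers: they only emit events to the Broker, and Triggers fan the events out to whichever services have subscribed.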


The main objective of the Serving component is to support the deployment of serverless applications and functions, automatic scaling down to zero, routing and network configuration via Istio, and snapshots of the deployed code and configurations. Knative uses Kubernetes as the orchestrator, while Istio handles request routing and advanced load balancing.
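For illustration, the manifest the Serving component consumes can be as small as the following Knative Service, which uses Knative’s public hello-world sample image (treat the exact image path as an assumption for your installation):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # public Knative sample
          env:
            - name: TARGET        # the sample app greets this value
              value: "World"
```

Applying this with `kubectl apply -f` makes Serving create the underlying Route, Configuration, and Revision objects, and the deployment scales down to zero when no requests arrive.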

Example of the Simplest Functions With Knative

You can use several methods to create a server application on Knative. Your choice will depend on your skills and experience with various services, including Istio, Gloo, Ambassador, Google Kubernetes Engine, IBM Cloud Kubernetes Service, Microsoft Azure Kubernetes Service, Minikube, and Gardener.

Simply select the installation file for each of the Knative components. Links to the main installation files for the three required components are listed below:

Serving Component

Building Component

Eventing Component

Each of these components is characterized by a set of objects. More detailed information about the syntax and installation of these components can be found on Knative’s own development site.

Pros and Cons of Knative

Knative has a number of benefits. Like OpenFaaS, Knative allows you to create serverless environments using containers. This in turn gives you a local event-based architecture free of the restrictions imposed by public cloud services. Both OpenFaaS and Knative let you automate the container build process and provide automatic scaling. Because of this, the capacity for serverless functions is based on predefined threshold values and event-processing mechanisms.

In addition, both OpenFaaS and Knative allow you to create applications internally, in the cloud, or in a third-party data center. This means that you are not tied to any one cloud provider. 

One main drawback of Knative is the need to manage the container infrastructure yourself. Simply put, Knative is not aimed at end-users. However, because of this, more commercially managed Knative offerings are becoming available, such as Google Kubernetes Engine and Managed Knative for the IBM Cloud Kubernetes Service.


Conclusion

Despite the growing number of open-source serverless platforms, OpenFaaS and Knative will continue to gain popularity among developers. It is worth noting that these platforms cannot be easily compared, because they are designed for different tasks. 

From the point of view of configuration and maintenance, OpenFaaS is simpler. With OpenFaaS, there is no need to install all components separately as with Knative, and you don’t have to clear previous settings and resources for new developments if the required components have already been installed.

The goal of the OpenFaaS project is to make serverless simple for developers. It achieves that by focusing on the user experience and by using contributors and end-users to co-develop the project. Knative, on the other hand, originated at Google and has focused on features that Google has found useful in its experience.
