Introducing Functions as a Service (OpenFaaS)
Translator's note: OpenFaaS is a serverless framework that was formally introduced in August but appeared about a year ago and quickly established itself near the very top of GitHub projects carrying the Kubernetes tag. The text below is a translation of the technical part of the project's official announcement by its author, Alex Ellis, who is well known in the community for his enthusiasm for Docker (he holds Docker Captain status).
Functions as a Service, or OpenFaaS, is a framework for building serverless functions on top of containers. I started the project as a proof of concept in October last year, when I wanted to understand whether it was possible to run Alexa skills or AWS Lambda functions on Docker Swarm. Early success led me to publish the first version of the Golang code on GitHub in December of that year.
This post offers a quick introduction to serverless computing and talks about the three main features that have appeared in FaaS over the last 500 commits.
Since the first commit, FaaS has gained popularity: it has received more than 4,000 stars on GitHub (more than 7,000 today; translator's note) and a small community of developers and hackers who have talked about it at various meetups, written their own functions, and contributed changes to the code. A significant milestone for me was getting a place among the headline Moby Cool Hacks sessions at DockerCon in Austin in April. The brief for those sessions was to push the boundaries of what Docker was created for.
What is serverless?
Architecture is evolving
Serverless is a bad name. We are talking about a new architectural pattern for event-driven systems. Serverless functions are often used as a bridge between other services or in an event-driven architecture. We used to call it a service bus .
Serverless is an evolution
Serverless functions
A serverless function is a small, discrete, and reusable piece of code that:
- is short-lived;
- is not a daemon (does not run continuously);
- does not start TCP services;
- is not stateful;
- makes use of your existing services or third-party resources;
- executes within a few seconds (the default timeout in AWS Lambda).
It is also important to distinguish between serverless products from IaaS providers and open source software projects.
On the one hand, we have serverless implementations from IaaS providers such as Lambda, Google Cloud Functions, and Azure Functions; on the other, frameworks like OpenFaaS that let orchestration platforms such as Docker Swarm or Kubernetes do the heavy lifting.
Cloud native: use your favorite cluster.
A serverless product from an IaaS vendor is fully managed, so it offers a high level of convenience and per-second or per-minute billing. The flip side is that you are tightly bound to the vendor's release and support cycle. Open-source FaaS frameworks exist to provide variety and choice.
What makes OpenFaaS different?
OpenFaaS is built on technologies that are industry standards in the cloud-native world:

The OpenFaaS stack

A feature of the OpenFaaS project is that any process can become a serverless function with the help of the watchdog component and a Docker container. This means three things:
- You can run code in any programming language;
- at any time as required;
- everywhere.
Switching to serverless should not mean having to rewrite your code in another programming language. Use whatever your business and team need.
For instance:
The command cat or sha512sum can become a function without any changes, since functions communicate over stdin/stdout. Windows functions are also supported via Docker CE. This is the main difference between FaaS and other open-source serverless frameworks, which depend on a special runtime for each supported language.
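To make the stdin/stdout contract concrete, here is a minimal sketch of such a process in Python. It assumes the classic watchdog model, in which the request body arrives on stdin and whatever the process writes to stdout becomes the HTTP response; the file name and the upper-casing behavior are purely illustrative, not part of the project.

# echo_upper.py: illustrative sketch of a process obeying the stdin/stdout contract.
import sys

def main():
    # Read the whole request body from stdin, exactly as cat or sha512sum would.
    body = sys.stdin.read()
    # Anything written to stdout is assumed to become the HTTP response body.
    sys.stdout.write(body.upper())

if __name__ == "__main__":
    main()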
Let's look at the three most significant features added since DockerCon (i.e., from April to August 2017; translator's note): the CLI and function templates, Kubernetes support, and asynchronous processing.
1. New CLI
Easy Deployment
A command-line interface (CLI) was added to the FaaS project to simplify deploying functions and to make them scriptable. Previously you could use the Gateway's user interface (UI) or call its API with curl. The new CLI lets you define functions in a YAML file and deploy them to the same Gateway API.
Finnian Anderson wrote a wonderful introduction to the FaaS CLI on The Practical Dev (dev.to).
Utility script and brew
There is a dedicated script for installing the CLI, and John McCabe helped with the brew formula:
$ brew install faas-cli
or:
$ curl -sL https://cli.get-faas.com/ | sudo sh
Templates
With the CLI's templates, it is enough to write a handler in your favorite programming language; the CLI then uses a template to build that handler into a Docker container with all the FaaS magic.
Two templates have been prepared: for Python and Node.js, but it's easy to create your own.
The CLI supports three actions:
- -action build: builds Docker images locally from the templates;
- -action push: uploads (pushes) images to the selected registry or the Hub;
- -action deploy: deploys FaaS functions.
If you have a single-node cluster, there is no need to push images before deploying them.
Here is an example CLI configuration file in YAML format (sample.yml):

provider:
  name: faas
  gateway: http://localhost:8080
functions:
  url_ping:
    lang: python
    handler: ./sample/url_ping
    image: alexellis2/faas-urlping
And here is the minimal (empty) handler for a function in Python:
def handle(req):
    print(req)
An example that checks the HTTP status code of a URL (./sample/url_ping/handler.py):

import requests

def print_url(url):
    try:
        r = requests.get(url, timeout=1)
        print(url + " => " + str(r.status_code))
    except:
        print("Timed out trying to reach URL.")

def handle(req):
    print_url(req)
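Before packaging the handler into an image, you can exercise it locally as plain Python. The snippet below is just a quick check outside of FaaS; it assumes handler.py sits in the current directory and that the requests module is installed.

# try_handler.py: run the handler directly, without the FaaS runtime.
from handler import handle

# Should print the URL followed by its HTTP status code, e.g. "... => 200".
handle("https://github.com")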
If additional pip modules are required, add a requirements.txt file alongside your handler (handler.py).

$ faas-cli -action build -f ./sample.yml
After running this command, a Docker image named alexellis2/faas-urlping will be built; it can be pushed to the Docker Hub with -action push and deployed with -action deploy. The CLI for FaaS is available in this repository.
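Once deployed, the function can be invoked through the Gateway API. The sketch below is only an illustration under stated assumptions: it presumes the gateway from sample.yml is listening on http://localhost:8080 and exposes functions under a /function/<name> route, which comes from the gateway's documented pattern rather than from this article.

# invoke_url_ping.py: hedged sketch of calling the deployed function via the gateway.
import requests

GATEWAY = "http://localhost:8080"  # the gateway address used in sample.yml

def invoke(function_name, payload):
    # The request body is passed to the function's handle(req) as `req`.
    r = requests.post(GATEWAY + "/function/" + function_name, data=payload)
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    # url_ping is the function name defined in sample.yml.
    print(invoke("url_ping", "https://github.com"))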
2. Kubernetes support
As a Docker Captain, I mostly focus on Docker Swarm and write about it, but I have always been interested in Kubernetes. While learning how to set up Kubernetes on Linux and Mac, I wrote three guides on the subject, and they were well received by the community.
Engineering support for Kubernetes
With a good understanding of how Docker Swarm concepts map to Kubernetes, I built a prototype and ported all the code in a few days. I chose to create a new microservice daemon that talks to Kubernetes rather than add extra dependencies to the core FaaS code base.
FaaS proxies calls to the new daemon through a standard RESTful interface for operations such as Deploy, List, Delete, Invoke, and Scale.
This approach meant that the user interface, CLI, and auto-scaling worked out of the box without any changes. The resulting microservice is maintained in a new GitHub repository called FaaS-netes and is available on the Docker Hub. Setting it up in a cluster takes about 60 seconds.
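For a feel of what that RESTful interface looks like from a client's point of view, here is a hedged sketch that lists deployed functions through the gateway, which in turn proxies the request to FaaS-netes on Kubernetes. The /system/functions path and the response fields are assumptions drawn from the gateway's public API, not details given in this article.

# list_functions.py: hedged sketch of the List operation via the gateway.
import requests

GATEWAY = "http://localhost:8080"

def list_functions():
    # The gateway is assumed to proxy this call to the orchestration backend.
    r = requests.get(GATEWAY + "/system/functions")
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    for fn in list_functions():
        # Each entry is expected to describe one deployed function.
        print(fn.get("name"), "->", fn.get("image"))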
Kubernetes Support Demo
In this video, FaaS is deployed on an empty cluster, and then the user interface, Prometheus, and auto-scaling are demonstrated.
But wait ... are there any other frameworks that work on Kubernetes?
Broadly speaking, there are two categories of serverless frameworks for Kubernetes: those that rely on a highly specific runtime for each supported programming language, and those that, like FaaS, let any container become a function.
FaaS has bindings to the native APIs of Docker Swarm and Kubernetes, i.e. it uses objects you have already worked with, such as Deployments and Services. That means less magic and less code to decipher when it comes to writing new applications.
When choosing a framework, consider whether you will want to contribute new features or fixes back to it. For example, OpenWhisk is written in Scala, while most of the others are written in Golang.
3. Asynchronous processing
One characteristic of a serverless function is that it is small and fast, executing synchronously and usually within a few seconds. But there are several reasons why you might want to process a function asynchronously:
- it is an event, and the caller does not need the result;
- execution or initialization takes a long time (for example, TensorFlow / machine learning);
- a large number of requests arrive as part of a batch job;
- you want to rate-limit processing.
The prototype for asynchronous processing began with a distributed queue. The implementation uses NATS Streaming, but can be extended to work with Kafka or any other abstraction that looks like a queue.
Illustration from the Twitter announcement of the asynchronous mode in FaaS
An asynchronous call with NATS Streaming as a backend is included in the project code base. Instructions for using it are available here .
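As a rough illustration of what an asynchronous call can look like from the client side, here is a sketch that assumes the gateway accepts deferred invocations on an /async-function/<name> route and replies immediately (typically with 202 Accepted) while NATS Streaming queues the work; the route name is an assumption rather than something taken from this article.

# async_invoke.py: hedged sketch of an asynchronous invocation via the gateway.
import requests

GATEWAY = "http://localhost:8080"

def invoke_async(function_name, payload):
    # The caller gets an acknowledgement, not the function's result,
    # matching the "caller does not need the result" case listed above.
    r = requests.post(GATEWAY + "/async-function/" + function_name, data=payload)
    return r.status_code

if __name__ == "__main__":
    print(invoke_async("url_ping", "https://github.com"))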
Contributions are welcome
... whether you want to help with issues, code features, project releases, scripting, tests, performance measurement, documentation, updating examples, or even blogging about the project.
There is always something for everyone, and all this helps the project move forward.
Send any feedback, ideas, suggestions to @alexellisuk or through one of the GitHub repositories .
Not sure where to start?
Get inspired by the discussions and community functions, which include machine learning with TensorFlow, ASCII art, and simple integrations.
PS from the translator
A month ago, the author of this material also published instructions for getting started with OpenFaaS on Kubernetes 1.8 using Minikube.
If you are interested in serverless on Kubernetes, you should also look (at least) at the Kubeless and Fission projects; the author of the article above gives a more complete list. We will probably cover them in our blog, but for now, here is some of our earlier material: