AWS Serverless Application Framework

    We decided to create a small framework for serverless web applications on AWS. Perhaps it would be more accurate to call it not a framework but a starter kit, I am not sure. The point is to provide a foundation for the rapid development of serverless applications on AWS. The code is posted on GitHub and is open to any improvements, of which many are possible.

    The article covers how to develop and test serverless applications locally, frontend and backend routing, Amazon services and the like. If you are interested, read on!

    Something like a preface

    Until recently, developing serverless applications was greatly complicated by the lack of tools for fully testing lambda functions and APIs locally. You either had to work online all the time, editing code in the browser, or constantly archive and upload the source code of lambda functions to the cloud.

    In the summer of 2017 a breakthrough occurred. AWS created a new, simplified CloudFormation template standard, which they called the Serverless Application Model (SAM), and launched the sam-local project at the same time. But first things first.

    Amazon CloudFormation is a service that lets you describe all the AWS infrastructure your application needs in a template file in JSON or YAML format. This is a very, very convenient thing, because without it you have to create every resource the application needs by hand, through the web console or the command-line interface: lambda functions, databases, APIs, roles and policies...

    Using CloudFormation, you can either draw the infrastructure in a special designer or write the template by hand. Either way, the result is a template file from which a single command brings up everything the application needs. Later, if necessary, you make changes to the template and apply them again with one command. This makes supporting the application infrastructure much easier. You get infrastructure as code.
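
    For illustration, a minimal template might look like this (the resource and stack names here are made up; this is a sketch of the format, not part of AB-ERP):

```yaml
# template.yaml - a minimal, hypothetical CloudFormation template
AWSTemplateFormatVersion: '2010-09-09'
Description: A single S3 bucket, just to show the format
Resources:
  UserFilesBucket:
    Type: AWS::S3::Bucket
```

    Creating or updating the stack is then a single command: aws cloudformation deploy --template-file template.yaml --stack-name my-app.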

    CloudFormation is beautiful; its templates can describe almost 100% of AWS resources. But because of this versatility it is a rather "verbose" format, and templates can quickly grow to a decent size. Realizing this, and aiming to make serverless application development easier, AWS created the new SAM format.

    You could loosely say that ordinary CloudFormation templates are written in a low-level language, while SAM templates are written in a high-level one, describing the infrastructure of serverless applications with a simplified syntax. On deployment, CloudFormation transforms SAM templates into regular templates.

    What is sam-local? It is a command-line tool for working locally with serverless applications described by SAM templates. Sam-local lets you test lambda functions, generate events from various AWS services, run API Gateway and validate SAM templates, all locally!

    Sam-local uses a Docker container to emulate API Gateway and Lambda. The principle of operation is as follows. On launch, sam-local looks for the SAM template file in the project folder. It analyzes the template and starts the resources declared in it inside the Docker container: it opens the APIs and connects the lambda functions to them. Moreover, the emulation is very close to how real lambda functions behave (limits, memory usage and execution duration are reported).

    It looks something like this:

    Georgiy@Baltimore MINGW64 /h/dropbox/projects/aberp/lambda (master)
    $ sam local start-api --docker-volume-basedir /h/Dropbox/Projects/aberp/lambda "aberp"
    INFO[0000] Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows
    2018/04/04 22:33:49 Connected to Docker 1.35
    INFO[0001] Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows
    2018/04/04 22:33:50 Fetching lambci/lambda:nodejs6.10 image for nodejs6.10 runtime...
    nodejs6.10: Pulling from lambci/lambda
    06c3813f: Already exists
    967675e1: Already exists
    daa0d714: Pulling fs layer
    Digest: sha256:56205b1ec69e0fa6c32e9658d94ef6f3f5ec08b2d60876deefcbbd72fc8cb12f
    Status: Downloaded newer image for lambci/lambda:nodejs6.10
    Mounting index.handler (nodejs6.10) at /{proxy+} [OPTIONS GET HEAD POST PUT DELETE PATCH]
    You can now browse to the above endpoints to invoke your functions.
    You do not need to restart/reload SAM CLI while working on your functions,
    changes will be reflected instantly/automatically. You only need to restart
    SAM CLI if you update your AWS SAM template.

    Next, requests to the local API and the corresponding lambda function invocations are displayed in the console, the same way lambda functions write information to the CloudWatch logs:

    2018/04/04 22:36:06 Invoking index.handler (nodejs6.10)
    2018/04/04 22:36:06 Mounting /h/Dropbox/Projects/aberp/lambda as /var/task:ro inside runtime container
    START RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5 Version: $LATEST
    END RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5
    REPORT RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5     Duration: 476.26 ms     Billed Duration: 500 ms     Memory Size: 128 MB     Max Memory Used: 37 MB

    Sam-local is still in public beta, but it seems to me it works quite stably.

    All of this lets you build a serverless application on your local computer, and it is no more complicated than building a traditional web application.

    I cannot help mentioning that sam-local has an analogue: the Serverless Framework. It is quite popular, largely because there were no alternatives before. I do not have much experience with it, but as far as I know it does not provide as complete a local environment as sam-local. Sam-local is developed by AWS itself, while the Serverless Framework is made by a separate team of enthusiasts. In favor of the Serverless Framework, however, is the fact that it keeps applications less tied to a specific vendor.

    About the framework

    As I already wrote, it exists to provide a quick start when creating new serverless applications. At the moment it only supports authorization with web tokens. Next we plan to add error handling, work with forms, output of tabular data, and a deployment mechanism. The idea is that in the future you can clone the AB-ERP repository and quickly start working on an application.

    We create ERP systems, so we called it AB-ERP by analogy with the names of our other products: AB-TASKS and AB-DOC. At the same time, AB-ERP is not limited to ERP systems; you can build any serverless web application on top of it.

    The application has frontend code and backend code. Accordingly, the project root contains two folders: lambda (backend) and public (frontend):

    +---lambda
    |   +---api
    |   \---core
    \---public

    AB-ERP works as a single-page application (SPA). When you deploy the application, the frontend code is placed in AWS S3 with CloudFront configured in front of it. I described this in my previous article about AB-DOC, in the "Development and Deployment" section.

    The backend code during deployment will be uploaded to AWS Lambda.

    AB-ERP uses MariaDB as a database. MariaDB is deployed on AWS RDS. If desired, AB-ERP can be reconfigured, for example, to work with AWS DynamoDB.

    User files will be saved in AWS S3.

    This is what the application architecture looks like:


    At the moment everything is very, very simple: just one API Gateway resource and just one lambda function.

    This is what the SAM template looks like:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: An example RESTful service
    Resources:
      ABLambdaRouter:
        Type: AWS::Serverless::Function
        Properties:
          Runtime: nodejs6.10
          Handler: index.handler
          Events:
            ABAPI:
              Type: Api
              Properties:
                Path: /{proxy+}
                Method: any

    In the SAM template we see our single resource, ABLambdaRouter, a lambda function. ABLambdaRouter is triggered by a single event, ABAPI, which comes from the API.

    Our API Gateway resource accepts requests with any method (ANY) to any path in the URL: /{proxy+}. In other words, it acts as a plain two-way proxy. The lambda function accordingly takes on the role of a router, executing different code depending on the request.
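
    To make this concrete, here is a trimmed sketch of the event object that API Gateway passes to the function under proxy integration (only a few fields are shown, and the values are made up):

```javascript
// a trimmed, made-up example of an API Gateway proxy integration event
const event = {
    httpMethod: 'GET',
    pathParameters: { proxy: 'users/list' },
    headers: {}
};

// the router splits the greedy {proxy+} path into a module and an action
const [resource, action] = event.pathParameters.proxy.split('/');
console.log(resource, action); // → users list
```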

    Lambda function code (router)
    'use strict';
    const jwt = require('jsonwebtoken');

    //process.env.PROD and other env vars are set in production only
    if(process.env.PROD === undefined){
        process.env.PROD = 0;
        process.env.SECRET = 'SOME_SECRET_CODE_672967256';
        process.env.DB_HOST = '';
        process.env.DB_NAME = 'ab-erp';
        process.env.DB_USER = 'ab-erp';
        process.env.DB_PASSWORD = 'ab-erp';
    }

    //core modules
    const HTTP = require('core/http');
    const DB = require('core/db');

    //main handler
    exports.handler = (event, context, callback) => {
        context.callbackWaitsForEmptyEventLoop = false;
        let api;
        const [resource, action] = event.pathParameters['proxy'].split('/');

        //OPTIONS requests are processed by API Gateway using MOCK integration;
        //sam-local can't do that, so for local development we handle them here
        if(event.httpMethod === 'OPTIONS'){
            return callback(null, HTTP.response());
        }

        //require resource module
        try {
            api = require('api/' + resource)(HTTP, DB);
        } catch(e) {
            if (e.code === 'MODULE_NOT_FOUND') {
                return callback(null, HTTP.response(404, {error: 'Resource not found.'}));
            }
            return callback(null, HTTP.response(500));
        }

        //call resource action
        if(api.hasOwnProperty(action)) {
            if(api[action].protected === 0){
                api[action](event, context, callback);
            } else if (event.headers['X-Access-Token'] !== undefined) {
                let token = event.headers['X-Access-Token'];
                try {
                    event.userData = jwt.verify(token, process.env.SECRET);
                    api[action](event, context, callback);
                } catch(error) {
                    return callback(null, HTTP.response(403, {error: 'Failed to verify token.'}));
                }
            } else {
                return callback(null, HTTP.response(403, {error: 'No token provided.'}));
            }
        } else {
            return callback(null, HTTP.response(404, {error: 'Action not found.'}));
        }
    };

    The API has a two-level hierarchy: the first level is a module, the second an action. URLs have the form /module/action. The router function analyzes the pathParameters of the incoming request, tries to load the required module from the lambda/api folder, and then hands the request to the required function in that module.

    By default, functions in modules require authorization, so before calling a function from a module our router checks for a valid token in the X-Access-Token request header. If the token is valid, the function from the module is called; if not, a 403 error is returned.

    Why did we choose this approach instead of creating many separate API Gateway resources and many lambda functions? First, and most importantly, for the ease of configuring, deploying and generally working with such an architecture. Second, this approach minimizes function cold starts: if a function has not been called for a while, AWS removes its container, and the next call takes longer to process the request.

    There are also disadvantages to this approach. We lose the ability to make special settings for individual API resources at the API Gateway level.

    Someone may ask: why do we need API Gateway at all, why not call lambda directly from the browser? API Gateway provides many benefits. It can work like a CDN in Edge Optimized mode, it caches responses, and it can answer OPTIONS requests without calling the backend (MOCK integration), all of which speeds up the application significantly. It also provides DDoS protection and the ability to throttle traffic with limits. Finally, it lets you open the application API to third-party developers.


    For the frontend we decided not to use "large" frameworks like React, Vue.js or Angular, so we wrote a small router for our SPA.

    The router stores a description of each page: which HTML template and which CSS and JS files it needs. When a page is requested, the router downloads all the necessary files as plain text, combines them and inserts them into the div container of the application interface. On insertion into the container, the JavaScript of the opened page is executed.

    Router code
    "use strict";
    //ROUTER object
    const ROUTER = {
        pages: {
            "index": ["css/index.css", "views/index.html", "js/index.js"],
            "login": ["css/login.css", "views/login.html", "js/login.js"]
        open: function(page){
            let self = this;
                const parts = self.pages[page];
                let getters = [];
                let wrappers = [];
                for (let i = 0; i < parts.length; i++) {
                    if( /^.*\.css$/i.test(parts[i]) ){
                    } else if ( /^.*\.js$/i.test(parts[i]) ){
                    } else {
                        $.get(parts[i], null, null, 'text').promise() 
                Promise.all(getters).then(function(results) {
                    let html = '';
                    for (let i = 0; i < results.length; i++) {
                        if(wrappers[i] === ''){
                            html += results[i];
                        } else {
                            html += `<${wrappers[i]}>${results[i]}`;                        
            } else {
        updatePath: function(newPath){
            if(newPath !== window.location.pathname) { 
                history.pushState({}, null, newPath);
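
    The wrapping step described above can be distilled into a small pure function (a sketch with a hypothetical name, separate from the actual router code):

```javascript
// wrap a downloaded page part in the tag its file type requires;
// html templates are inserted as-is
function wrapPart(name, text) {
    if (/\.css$/i.test(name)) return `<style>${text}</style>`;
    if (/\.js$/i.test(name)) return `<script>${text}</script>`;
    return text;
}

console.log(wrapPart('css/index.css', 'body{margin:0}'));
// → <style>body{margin:0}</style>
```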

    Environment setting

    Everything required to run the project on your computer I have tried to lay out step by step in the README on the project's GitHub. If something does not work out, write in the comments and we will try to help, and update the README accordingly.

    For local testing, I wrote a small HTTP server in Node.js:

    const express = require('express');
    const fs = require('fs');
    const app = express();
    app.use(function(req, res, next) {
        //if the requested file does not exist in public, serve the SPA entry point
        if (!fs.existsSync('public' + req.url)) {
            req.url = '/app.html';
        }
        next();
    });
    app.use(express.static('public'));
    app.listen(80, () => console.log('Listening..'));

    Before you start, run it with the command node abserver.js. When a request arrives, the server looks for the file in the public folder and serves it if found. If the file is not found, it serves the main application file public/app.html. This is quite enough for the SPA to work. In production, Amazon CloudFront solves the same problem.


    AB-ERP is still very raw. We welcome any suggestions and comments, and even more so, commits.

    Currently, only authorization is more or less implemented in AB-ERP. I plan to cover it in one of the following articles: what authorization options exist when working with API Gateway, and why we did not use a custom authorizer or integration with Cognito.

    Here are some plans for the further development of the project.

    The key components of any data-driven application are forms for data entry and tables for data output. Therefore, functionality for working with forms and tables will be added first.

    There is an idea to standardize work with forms (building forms on a page, validation on the backend and frontend, saving to the database) using YAML templates. That is, you would describe a form in a YAML template, and the rest of the frontend and backend work would be done by the AB-ERP code. For tables we will use the DataTables library, which we used in our task tracker AB-TASKS.
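
    As a very rough illustration of the idea (the keys and field names here are invented; nothing like this exists in the repository yet), such a YAML form description might look like:

```yaml
# hypothetical sketch of a form described in YAML
form: customer
fields:
  - name: email
    type: string
    required: true
    validate: email
  - name: age
    type: integer
    min: 0
```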

    When writing the article, the following tools helped me:

    • An online charting service
    • The tree command of the Windows command line, for drawing the directory tree
