
Scorocode Cloud Service Development: Part 1

In this article I will describe how we developed the Scorocode cloud service, what problems we ran into, and, most importantly, share our development plans.
A short survey at the end of the article lets readers vote for the features planned for the future and thereby influence the development strategy of the service.
Background
Since 2011 I had been actively using Parse for quick experiments in mobile and web application development. The usefulness of the service was never in doubt; only a number of shortcomings periodically made me want to find something more convenient.
Over time, having tried several similar services, I came to the following conclusions:
- Backend as a Service is needed by many developers of client-server systems, since it speeds up the work severalfold, especially at the initial stage, when the data structure and server logic are already needed but the resources to develop them are limited.
- The core functionality of such a service is structured data storage, access to that data through SDKs for different platforms, and the ability to develop server-side logic. Additional functionality, such as SMS/push/email messaging and ready-made objects like users and roles, can be implemented by developers on their own, but having it out of the box speeds up the work even more and lets them focus on the frontend.
- There is no such service with complete documentation in Russian. Yes, there are partial translations and a handful of examples, but they do not give a full picture of the capabilities and pitfalls of a service.
This is how the idea arose of building such a service for a Russian-speaking audience, one in which we could implement both our own wishes and those of its users.
I will skip the story of the organizational path from an idea to an investment project in 2015; my colleagues can tell it better. The technical side, however, I will cover in more detail.
Development tools
After determining the minimum set of functions that had to be implemented, we started choosing our development tools.
The option of using paid proprietary software was dismissed right away as untenable. The main reason is that over the past five or six years the software development industry has undergone major qualitative changes for the better. Tasks that previously could be solved only with the platforms and tools of the IT “monsters” can today be solved quickly and effectively with modern development tools, programming languages and platforms, most of which are distributed under the MIT license.
So, having formed the list of functions, we began choosing the platform for the main part of the service: the API server. In our view, a single server had to handle at least 10,000 requests per second, so that clusters of such servers could withstand loads of up to 50,000 requests per second. This number did not appear out of thin air: one of the industrial systems we develop has exactly these load requirements, and we took it as a starting point with the aim of moving that system's backend to the cloud (incidentally, the requirements of that same system let us calculate the economic benefit of using a cloud backend).
As a result, we tested three API implementation variants, all exchanging data in JSON. Testing was conducted with Yandex.Tank. The results:
- Node.js + Express.js - 4,000 requests per second
- Node.js + Total.js - 1,500 requests per second
- Our own server in Golang - 20,000 requests per second (a minimal sketch of this kind of handler is shown below)
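For illustration, a minimal sketch of the kind of bare Go HTTP handler such a benchmark exercises is shown below. The route, payload, and port are my assumptions, not the actual Scorocode API; the point is only that a plain net/http server with JSON encoding already serves each connection in its own goroutine and scales across cores without extra machinery.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// pingResponse is a hypothetical payload used only for load testing.
type pingResponse struct {
	Status string `json:"status"`
	App    string `json:"app"`
}

func main() {
	// A single JSON endpoint, roughly what a Yandex.Tank scenario would hammer.
	http.HandleFunc("/api/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(pingResponse{Status: "ok", App: "scorocode"})
	})

	// ListenAndServe spawns a goroutine per connection, which is what lets
	// a plain Go binary sustain tens of thousands of requests per second.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```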
I will add that mongoDB was chosen unanimously as the DBMS: a modern, scalable database that withstands heavy loads and comes with detailed, high-quality documentation and a large number of examples and drivers for popular programming languages.
The choice fell on our own implementation, and we moved on to working out the architecture.
Architecture
The main task in building the service architecture was the implementation of a scalable cluster system. After the experiments, we came to the following configuration:
- Entry point for API requests: DNS round robin distributes calls between the balancers;
- Balancer: Nginx, distributes requests between the API servers;
- API server: an in-house Golang development with an MC (Model-Controller) architecture; each server receives application information from the main database (mongoDB), including the address of the data cluster where that application's data is stored, and caches it in Redis for 10 minutes, with the cache reset whenever the application is modified (see the caching sketch after this list);
- Data cluster: a mongoDB cluster plus a Redis instance;
- File storage: OpenStack Swift;
- Queue server: RabbitMQ, used to queue jobs for running server scripts, sending messages, and so on;
- Microservices: our own Golang services for migration from Parse, message delivery (email, push, SMS), and server-side code execution (a module built on the Google V8 engine).
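As an illustration of the application-information caching described in the API server item above, here is a minimal sketch assuming the redigo Redis client and the mgo.v2 mongoDB driver. The key layout, collection name, and appMeta structure are hypothetical; the real server caches more than this and handles errors more carefully.

```go
package appcache

import (
	"time"

	"github.com/gomodule/redigo/redis"
	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// appMeta is a hypothetical shape of the application information the API server caches.
type appMeta struct {
	AppID       string `bson:"appId"`
	DataCluster string `bson:"dataCluster"` // address of the cluster holding the app's data
}

// metaTTL is the cache lifetime mentioned in the list above.
const metaTTL = 10 * time.Minute

// loadAppMeta returns application information, trying Redis first and
// falling back to the main mongoDB database on a cache miss.
func loadAppMeta(rc redis.Conn, mainDB *mgo.Database, appID string) (*appMeta, error) {
	key := "appmeta:" + appID

	// 1. Try the cache.
	if data, err := redis.Bytes(rc.Do("GET", key)); err == nil {
		var meta appMeta
		if err := bson.Unmarshal(data, &meta); err == nil {
			return &meta, nil
		}
	}

	// 2. Cache miss: read from the main database.
	var meta appMeta
	if err := mainDB.C("applications").Find(bson.M{"appId": appID}).One(&meta); err != nil {
		return nil, err
	}

	// 3. Store in Redis with a 10-minute TTL.
	if data, err := bson.Marshal(&meta); err == nil {
		rc.Do("SETEX", key, int(metaTTL.Seconds()), data)
	}
	return &meta, nil
}

// invalidateAppMeta drops the cached entry when the application is modified.
func invalidateAppMeta(rc redis.Conn, appID string) {
	rc.Do("DEL", "appmeta:"+appID)
}
```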
Many small tasks came up during development and testing, but overall the architecture proved viable and the whole complex passed its tests successfully.
Functions
As I wrote above, at the initial stage we implemented the basic functionality. The boundaries of this minimum set were determined by the need to migrate Parse users and by the minimum backend requirements for developing applications of moderate complexity.
While implementing this functionality we ran into problems, some serious and some less so. Here are a couple of characteristic ones.
Problem 1. BSON parsing speed.
As you know, mongoDB returns data in BSON format, which is fairly simple to parse and convert into JSON. Nevertheless, on large volumes BSON parsing takes considerable time: for example, parsing a sample of 1,000 medium-sized documents from BSON into JSON takes more than 1.5 seconds. For us that speed was unacceptable.
We tried completely rewriting the parser of the mgo.v2 driver. It did not help. We concluded that the time could be reduced either by increasing the clock speed and core count of the server, or by shifting the task to the client.
In the end we decided to return all query results in BSON format and parse them in the SDK on the client side. That is how it works to this day.
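A minimal sketch of that approach, assuming the mgo.v2 driver: query results are read as bson.Raw and written to the response without any server-side decoding, leaving the parsing to the client SDK. The handler shape, content type, and empty query are illustrative only.

```go
package api

import (
	"log"
	"net/http"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// rawFindHandler streams query results as raw BSON documents without
// decoding them on the server; the client SDK does the parsing.
func rawFindHandler(coll *mgo.Collection) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/bson")

		var doc bson.Raw
		iter := coll.Find(bson.M{}).Iter()
		for iter.Next(&doc) {
			// Every BSON document carries its own length prefix, so the SDK
			// can split the concatenated stream back into documents.
			w.Write(doc.Data)
		}
		if err := iter.Close(); err != nil {
			log.Printf("raw find failed: %v", err)
		}
	}
}
```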
Problem 2. The speed of JavaScript triggers.
Initially, the engine for running server-side scripts was Google V8, and it did a great job with asynchronous scripts. But problems arose with triggers on data operations.
The V8 engine itself is very fast, but it starts relatively slowly: 150-300 milliseconds. And we had a limit of 500 milliseconds on trigger execution time. Giving half of that to engine startup was unreasonable, and creating a pool of pre-launched “workers” would bring a pile of problems with context switching.
Therefore, for triggers we chose the fastest option for executing JavaScript code in Golang: the Otto library by Robert Krimen. Yes, it has certain limitations, but it suited the task of executing triggers perfectly. On top of this library we implemented a “terminator” of the call stack to break infinite loops of trigger calls (for example, when an insert operation is invoked inside a beforeInsert trigger).
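Below is a sketch of how a trigger's execution time can be capped with otto's Interrupt channel; the 500-millisecond limit matches the one mentioned above, but the wiring is my own illustration rather than the actual “terminator”, which additionally has to detect and break recursive trigger chains.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/robertkrimen/otto"
)

var errHalt = errors.New("trigger execution took too long")

// runTrigger executes trigger source code with a hard time limit,
// using otto's Interrupt channel to abort the interpreter.
func runTrigger(src string, limit time.Duration) (err error) {
	vm := otto.New()
	vm.Interrupt = make(chan func(), 1)

	// Turn the panic raised inside the interrupt callback into a normal error.
	defer func() {
		if caught := recover(); caught != nil {
			if caught == errHalt {
				err = errHalt
				return
			}
			panic(caught)
		}
	}()

	// Ask the interpreter to stop once the limit expires.
	timer := time.AfterFunc(limit, func() {
		vm.Interrupt <- func() { panic(errHalt) }
	})
	defer timer.Stop()

	_, err = vm.Run(src)
	return err
}

func main() {
	// An endless loop stands in for a runaway trigger.
	fmt.Println(runTrigger(`while (true) {}`, 500*time.Millisecond))
}
```

otto polls the Interrupt channel while evaluating the script, so the panic raised in the callback stops even a tight loop that never yields on its own.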
One could write endlessly about the problems and tasks that come up during implementation. I hope the audience will point out the technical topics it would be interested to read about, and I will gladly cover them.
What's next?
We have now planned and started work on new system features. Given the consistently high interest in the Scorocode service, we would like to hear the community's opinion on whether these features are needed. We are ready to answer all your questions in the comments to the article.
Which of the features would you like to see in the Scorocode cloud service?
- 27.2% (9 votes): English version of the service (interface and documentation)
- 45.4% (15 votes): Registration and authorization via social networks
- 27.2% (9 votes): Additional payment methods (Yandex.Money, Qiwi, etc.)
- 66.6% (22 votes): WebSockets, i.e. client-server WebSocket channels with message publishing and channel subscriptions
- 24.2% (8 votes): npm support, a mechanism for using npm libraries in server-side scripts
- 24.2% (8 votes): Smart indexes, i.e. analysis of queries against application data with automatic index construction
- 39.3% (13 votes): Bot factory, the ability to build bots with their own data and logic in Scorocode
- 33.3% (11 votes): Frontend design tools, our own mockup tools for frontend design
- 36.3% (12 votes): Code generation, i.e. generation of frontend source code from mockups tied to the application data