Nginx + server-side JavaScript
... or how to switch from PHP + JavaScript to JavaScript + JavaScript
The idea of implementing a project in server-side JavaScript had been around for a long time. The problem was the lack of suitable server software. Existing open-source projects did not fit for various reasons: installing an add-on module for Apache was not a good option, since neither performance nor memory usage would be up to the mark; with jslibs you can set up FastCGI, but I really did not want to leave even the slightest chance of a "502 Bad Gateway"; the ngx_http_js_module project remained in its infancy; and ngxv8 was not developed enough to build real applications on. So I decided to make my own implementation of server-side JavaScript, and to program all the basic functionality right away, so that it could be tested in conditions close to reality.
It was decided to use nginx as the main web server, TraceMonkey (the JavaScript engine from Mozilla Firefox, formerly SpiderMonkey) as the JavaScript engine, and to write an nginx module that would glue the two together. Nothing complicated at first glance, but I really wanted a certain set of features (and it worked!) so that I could keep working comfortably afterwards. Most of the ideas are borrowed, by the way, from PHP.
- Correct operation under multi-threaded conditions
- The ability to execute the script specified in the URL, rather than configuring a handler script and a handler function for each location separately
- The ability to call include(), sleep(), alert() from a script, and to use __FILE__ and __LINE__
- Limits on the memory allocated to each script and on script run time
- Protection of files opened by a script via a list of allowed directories in the settings, somewhat like open_basedir in PHP
- Automatic parsing of request data (GET and POST parameters and, of course, cookies), so that the data handling does not have to be written in JavaScript
- Support for application/x-www-form-urlencoded and multipart/form-data requests
- Basic authorization support
- Work with databases (first of all, MySQL and SQLite)
- Work with the file system: reading and writing files, checking the existence of files, etc.
- Script bytecode caching, as in eAccelerator
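To illustrate what the automatic request parsing does, here is a minimal plain-JavaScript sketch of decoding an application/x-www-form-urlencoded body into a parameter object. The module does the equivalent in C before exposing the data to the script; the function name here is hypothetical, not part of the module's API:

```javascript
// Decode "a=1&b=hello+world" into { a: "1", b: "hello world" } -
// roughly what happens to GET query strings and POST bodies.
function parseUrlEncoded(body) {
  var params = {};
  var pairs = body.split("&");
  for (var i = 0; i < pairs.length; i++) {
    if (!pairs[i]) continue;                       // skip empty segments ("a=1&&b=2")
    var eq = pairs[i].indexOf("=");
    var rawKey = eq === -1 ? pairs[i] : pairs[i].slice(0, eq);
    var rawVal = eq === -1 ? "" : pairs[i].slice(eq + 1);
    // '+' means space in form encoding; the rest is percent-decoded
    params[decodeURIComponent(rawKey.replace(/\+/g, " "))] =
      decodeURIComponent(rawVal.replace(/\+/g, " "));
  }
  return params;
}
```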
From words to action! How to compile and configure, how to test and compare...
I will not go deep into the build details, otherwise the text would grow to an incredible size. Users with experience building programs for Linux will feel quite comfortable, and for everyone else I can offer a binary build and the option to skip compiling it yourself.
You will need:
- Linux
- C and C++ compilers, autoconf 2.13
- Nginx sources
- TraceMonkey from the repository
- NSPR library
- Our module
- MySQL and SQLite (optional) + development tools
First, build the latest NSPR (4.8.2 at the time of writing), then TraceMonkey from the repository (at the time of writing the repository holds version 1.8.5, while the downloadable source archive is only available for 1.7.0). The TraceMonkey step can be problematic for a couple of reasons: first, not everyone has the hg command, and second, cloning the repository downloads all of the Mozilla Firefox sources. You can therefore replace the first line of the commands below, download only the TraceMonkey sources as an archive, and compile them from there. After that, build nginx (0.8.32) together with the JavaScript module. If everything worked out, proceed to the configuration steps that follow the build commands. Happy owners of the binary build will find that the configuration has already been done, but checking once more will not hurt:
wget ftp://ftp.mozilla.org/pub/mozilla.org/nspr/releases/v4.8.2/src/nspr-4.8.2.tar.gz
tar -xzf nspr-4.8.2.tar.gz
cd nspr-4.8.2/mozilla/nsprpub
./configure --prefix=/usr/local --with-pthreads
make
sudo make install
hg clone http://hg.mozilla.org/tracemonkey/
cd tracemonkey/js/src
autoconf2.13
./configure --prefix=/usr/local --with-nspr-prefix=/usr/local --with-system-nspr --with-pthreads --enable-threadsafe
make
sudo make install
# hg clone http://hg.mozilla.org/tracemonkey/
wget http://js.nnov.ru/files/tracemonkey-20100119.tar.gz
tar -xzf tracemonkey-20100119.tar.gz
wget http://sysoev.ru/nginx/nginx-0.8.32.tar.gz
tar -xzf nginx-0.8.32.tar.gz
cd nginx-0.8.32/src/http/modules
svn co http://nginx-javascript.googlecode.com/svn/trunk/ javascript
cd ../../..
./configure --prefix=/usr/local/nginx-javascript --add-module=src/http/modules/javascript
make
sudo make install
- Add the type application/x-javascript-serverside to mime.types for files that will be processed as server-side JavaScript:
The .jsx extension is chosen instead of the standard .js so that the server does not treat ordinary client-side scripts as server-side ones.
# /usr/local/nginx-javascript/conf/mime.types
types {
...
application/x-javascript-serverside jsx;
...
}
- Allow JavaScript processing in the location / section of the nginx.conf file and, at the same time, change the port on which the server will listen:
# /usr/local/nginx-javascript/conf/nginx.conf
...
server {
listen 8081;
...
location / {
...
javascript on;
...
}
}
...
- Run nginx:
/usr/local/nginx-javascript/sbin/nginx
- Create a test hello.jsx script:
// /usr/local/nginx-javascript/html/hello.jsx
print("Hello, people!");
- Check that hello.jsx returns what it should (in a browser or with curl):
curl http://localhost:8081/hello.jsx
The comparison involved:
- Apache/2.2.14 (prefork) + PHP/5.2.12 (module)
- nginx/0.8.32 (1 worker process) + javascript
- nginx/0.8.32 (8 worker processes) + javascript
First, a test cycle of 1000 requests sent one after another; then a test cycle of 1000 requests with 100 simultaneous connections. Conclusions from the testing follow the results:
# Apache 2.2.14 (prefork) + PHP 5.2.12 (module)
ab -n 1000 http://localhost:8085/hello.php
Time per request: 5.278 [ms] (mean, across all concurrent requests)
# nginx (1 worker) + javascript
ab -n 1000 http://localhost:8081/hello.jsx
Time per request: 1.298 [ms] (mean, across all concurrent requests)
# nginx (8 workers) + javascript
ab -n 1000 http://localhost:8088/hello.jsx
Time per request: 1.322 [ms] (mean, across all concurrent requests)
# Apache 2.2 (prefork) + PHP 5.2 (module)
ab -n 1000 -c 100 http://localhost:8085/hello.php
Time per request: 1.648 [ms] (mean, across all concurrent requests)
# nginx (1 worker) + javascript
ab -n 1000 -c 100 http://localhost:8081/hello.jsx
Time per request: 1.277 [ms] (mean, across all concurrent requests)
# nginx (8 workers) + javascript
ab -n 1000 -c 100 http://localhost:8088/hello.jsx
Time per request: 0.544 [ms] (mean, across all concurrent requests)
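For context, ab's "Time per request (mean, across all concurrent requests)" converts directly into throughput: requests per second is simply 1000 divided by that time in milliseconds. A quick sketch of the conversion (the function name is illustrative):

```javascript
// Convert ab's mean time-per-request (ms, across all concurrent
// requests) into approximate requests per second.
function requestsPerSecond(timePerRequestMs) {
  return 1000 / timePerRequestMs;
}

// e.g. 1.298 ms is roughly 770 req/s, and 0.544 ms roughly 1838 req/s
```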
- If requests arrive at the server sequentially, one after another, nginx + javascript is much faster (about 3 times in our case). Moreover, nginx with a single worker process is even slightly faster than with eight. In reality this situation almost never occurs: more often, many clients open different pages at the same time.
- If requests arrive at the server simultaneously, apache + php speeds up (showing almost the same speed as nginx + javascript with one worker process). But nginx + javascript with several worker processes speeds up as well (more than 2 times in our case), while nginx + javascript with one worker process stayed almost unchanged.
A few examples of the script API:
// Prints the id parameter from the GET, POST and cookie data:
print($request.get['id'], " ", $request.post['id'], " ", $request.cookie['id']);
// Sends a Content-Type header:
$result.headers.push("Content-Type: text/html; charset=UTF-8");
// Opens a database, runs a SELECT query with a parameter passed via GET, and fetches one result row:
var row = (new SQLite("database")).query("SELECT * FROM `table` WHERE `id`=?", $request.get['id']).fetch();
// Reads a file:
print(File.open("index.html").getChars());
// Prints the IP address of the client that opened the page:
print($server.remoteAddr);
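Putting those pieces together, here is a sketch of a small .jsx page. Inside nginx the module provides print, $request and $server as globals; the stubs below exist only so the snippet can run standalone and are not part of the module's API:

```javascript
// Stand-ins for the globals the module injects into every script.
// Inside nginx these are provided automatically.
var out = [];
function print() { out.push(Array.prototype.join.call(arguments, "")); }
var $request = { get: { id: "7" }, post: {}, cookie: {} };
var $server = { remoteAddr: "127.0.0.1" };

// The page itself: greet the client and echo the requested id.
print("Hello, ", $server.remoteAddr, "!");
print("You asked for item ", $request.get['id']);
```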
What remains to be done:
- File upload support (coming soon!)
- Support for cURL and GD, without which life is very hard
- Optimization of the stat() system calls that are currently used to determine the real path to a file
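The stat() optimization could, for example, take the form of a small cache that remembers resolved paths for a short time, trading freshness for fewer system calls. A hypothetical sketch (the names, signature and TTL scheme are illustrative, not the module's actual design):

```javascript
// Hypothetical realpath cache: remember resolved paths for ttlMs
// milliseconds, so repeated requests skip the stat()/realpath calls.
// resolve(path) does the real (expensive) resolution; now() returns
// the current time and is injectable for testing.
function makeRealpathCache(resolve, ttlMs, now) {
  now = now || Date.now;
  var cache = {};
  return function (path) {
    var entry = cache[path];
    if (entry && now() - entry.time < ttlMs) {
      return entry.real;                       // cache hit: no system call
    }
    var real = resolve(path);                  // cache miss: resolve for real
    cache[path] = { real: real, time: now() };
    return real;
  };
}
```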
P.S. Special thanks to FTM for the invite, thanks to which this topic is no longer in the sandbox.
UPD: I would have published this as a full topic right away, but there were problems with karma. Thanks to everyone involved!