Nginx UploadProgress Module
A module for nginx that makes it simple to monitor the progress of file uploads to the server. There have been similar solutions before: PHP and Ruby modules, third-party scripts, Flash objects, and so on. The author offers a universal solution at the web-server level. You can find detailed information and instructions on wiki.codemongers.com (incidentally, one of the best resources dedicated to nginx). Here, though, I wanted to draw attention to something else.
I used the module in conjunction with the nginx upload module, and it works like a charm. However, the files I upload are fairly large (I run a file-sharing service), so I tried to set the size in the upload_progress directive to 1g. On reloading nginx I got a warning saying the size value was invalid. Strange, because client_max_body_size is set to 1g and works just fine. I looked through the nginx and module sources and found that the module parses the upload_progress directive with the ngx_parse_size function, while nginx parses client_max_body_size with the neighboring ngx_parse_offset function. The two functions are practically identical, except that the first does not understand the «g» suffix and the second does =) To make the module accept this suffix in the upload_progress directive, change ngx_parse_size to ngx_parse_offset at line 1151 of the module source (ngx_http_uploadprogress_module.c).
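As a sketch, the configuration that triggers the warning might look like the following. This assumes the standard upload-progress module directives (track_uploads, report_uploads) and the upload module's upload_pass; zone name and locations are illustrative:

```nginx
http {
    # Parsed by ngx_parse_size in the module: before the patch,
    # the "g" suffix is rejected with a warning on reload.
    upload_progress uploads 1g;

    server {
        # Parsed by ngx_parse_offset in nginx core: "1g" is accepted fine.
        client_max_body_size 1g;

        location /upload {
            # Handled by the nginx upload module (illustrative backend path)
            upload_pass /internal_upload;
            # Track this location's uploads in the "uploads" zone;
            # keep progress data for 30s after the upload completes.
            track_uploads uploads 30s;
        }

        location ^~ /progress {
            # Endpoint the browser polls for progress of the "uploads" zone
            report_uploads uploads;
        }
    }
}
```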
This is not critical, since the directive happily accepts a value of 1024m, but such a record looks absurd =)))
All in all, I really liked the module: a very convenient solution that does not depend on the back-end.
update: Warning, I made a silly mistake, and I am ashamed of it. The upload_progress directive does not specify the maximum size of uploaded files for the given zone, as I thought, but the amount of RAM allocated for tracking uploads in that zone. A sensible value for this directive is 1–2m, at most 10–20m. By setting 1g, you burn a gigabyte of RAM on upload tracking. Shame on me; do not repeat this mistake.
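With that correction in mind, a more sensible sketch keeps the tracking zone small and puts the file-size limit where it belongs (values and names are illustrative):

```nginx
http {
    # Amount of memory for the upload-tracking zone, NOT a file-size
    # limit: 1m is usually plenty.
    upload_progress uploads 1m;

    server {
        # The actual upload size limit lives here instead.
        client_max_body_size 1g;
    }
}
```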
P.S. A bug was found nevertheless; the author has been informed and promised to fix it.