Rapid response to DDoS attacks
One of the resources I look after suddenly became popular, with both good users and bad ones. The hardware, fairly powerful in general, stopped coping with the load. The server software is as common as it gets: Linux, Nginx, PHP-FPM (+ APC), MySQL, all recent versions. The sites run Drupal and phpBB. Optimization at the software level (memcached, database indexes where they were missing) helped a little but did not fundamentally solve the problem, which was a large number of requests to static files, dynamic pages, and especially the database. I set the following limits in Nginx:
On connections:
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn perip 100;
And on the request rate for dynamic content (fastcgi_pass to php-fpm):
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=2r/s;
limit_req zone=dynamic burst=10 nodelay;
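For context, a minimal sketch of how these directives fit together (the zone definitions belong in the http block, the limits in a server or location; the location patterns and php-fpm address here are illustrative assumptions, not my exact config):
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=dynamic:10m rate=2r/s;

    server {
        location / {
            limit_conn perip 100;
        }
        location ~ \.php$ {
            limit_req zone=dynamic burst=10 nodelay;
            fastcgi_pass 127.0.0.1:9000;
        }
    }
}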
It got much easier: the logs show that nobody even hits the first zone, while the second one fires at full tilt.
But the bad guys kept hammering away, and I wanted to drop them earlier, at the firewall level, and for a longer time.
At first I parsed the logs myself and banned the most annoying offenders via iptables. Then a parsing script ran from cron every 5 minutes, something like the sketch below. I tried fail2ban. When I realized just how many bad guys there were, I moved them into an ipset of type hash:ip.
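Roughly the kind of cron parser I mean, as a sketch (the threshold, paths, and log format are illustrative assumptions, not my actual script):
#!/bin/bash
# Count requests per source IP in the access log and ban the heaviest offenders.
awk '{print $1}' /var/log/nginx/access.log \
    | sort | uniq -c | sort -rn \
    | awk '$1 > 1000 {print $2}' \
    | while read ip; do
        ipset add web_black_list "$ip" 2>/dev/null
    done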
Almost everything was fine now, but some unpleasant issues remained:
- parsing and sorting the logs itself takes considerable CPU time
- the server dies if a new wave starts between two consecutive parsing runs
I had to figure out how to add offenders to the blacklist quickly. The first idea was to write an Nginx module plus a daemon that would update the ipset. It could be done without the daemon, but then Nginx would have to run as root, which is ugly. Writing it is doable, but I realized I did not have that much time. I could not find anything ready-made (maybe I searched badly?), so I came up with the following scheme.
When a limit is exceeded, Nginx returns a 503 Service Temporarily Unavailable error, so I decided to hook into exactly that!
For each location we define its own error page:
error_page 503 =429 @blacklist;
And the corresponding named location:
location @blacklist {
fastcgi_pass localhost:1234;
fastcgi_param SCRIPT_FILENAME /data/web/cgi/blacklist.sh;
include fastcgi_params;
}
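A quick way to exercise the hook (the URL here is hypothetical): hammer a dynamic page and watch the status codes flip from 200 to 429 once the limit trips.
for i in $(seq 1 30); do
    curl -s -o /dev/null -w "%{http_code}\n" http://example.com/index.php
done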
Now for the more interesting part.
We need CGI script support, so we install, configure, and run spawn-fcgi and fcgiwrap. I already had them set up for collectd.
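If you do not have them running yet, a minimal manual launch could look like this (a sketch assuming fcgiwrap is installed at /usr/sbin/fcgiwrap; distributions usually ship an init script or service unit instead):
# Start one fcgiwrap worker on the TCP port that fastcgi_pass points at.
spawn-fcgi -a 127.0.0.1 -p 1234 -F 1 -u nginx -g nginx -- /usr/sbin/fcgiwrap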
The CGI script itself:
#!/bin/bash
BAN_TIME=5
DB_NAME="web_black_list"
SQLITE_DB="/data/web/cgi/${DB_NAME}.sqlite3"
CREATE_TABLE_SQL="\
CREATE TABLE $DB_NAME (\
ip varchar(16) NOT NULL PRIMARY KEY,\
added DATETIME NOT NULL DEFAULT (DATETIME()),\
updated DATETIME NOT NULL DEFAULT (DATETIME()),\
counter INTEGER NOT NULL DEFAULT 0
)"
ADD_ENTRY_SQL="INSERT OR IGNORE INTO $DB_NAME (ip) VALUES (\"$REMOTE_ADDR\")"
UPD_ENTRY_SQL="UPDATE $DB_NAME SET updated=DATETIME(), counter=(counter+1) WHERE ip=\"$REMOTE_ADDR\""
SQLITE_CMD="/usr/bin/sqlite3 $SQLITE_DB"
IPSET_CMD="/usr/sbin/ipset"
# Ban first, bookkeeping later: add the client to the ipset straight away.
$IPSET_CMD add $DB_NAME $REMOTE_ADDR > /dev/null 2>&1
# Create the statistics database on first use.
if [ ! -f "$SQLITE_DB" ]; then
    $SQLITE_CMD "$CREATE_TABLE_SQL"
fi
# Record the offender and bump its counter.
$SQLITE_CMD "$ADD_ENTRY_SQL"
$SQLITE_CMD "$UPD_ENTRY_SQL"
echo "Content-type: text/html"
echo ""
echo ""
echo "429 Too Many Requests "
echo ""
echo "429 Too Many Requests
"
echo "Your address ($REMOTE_ADDR) is blacklisted for $BAN_TIME minutes
"
echo "
$SERVER_SOFTWARE "
echo ""
echo ""
Everything here should be obvious, except perhaps the SQLite part. For now I added it purely for statistics, but in principle it could be used to expire stale entries from the blacklist. The 5-minute BAN_TIME is not actually used yet either, except in the message.
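For example, a hypothetical cleanup along those lines (not part of my setup; the paths match the script above):
# Remove addresses that have not reoffended for 5 minutes from the ipset.
sqlite3 /data/web/cgi/web_black_list.sqlite3 \
    "SELECT ip FROM web_black_list WHERE updated < DATETIME('now', '-5 minutes')" \
    | while read ip; do
        ipset del web_black_list "$ip" 2>/dev/null
    done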
The blacklist itself was created like this:
ipset create web_black_list hash:ip
Each ipset can be matched by its own iptables rule, depending on your configuration and imagination; a basic example follows.
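For instance, a minimal rule (my example, not quoted from the original setup) that drops blacklisted sources as early as possible:
# Drop all inbound traffic from addresses in the web_black_list set.
iptables -I INPUT -m set --match-set web_black_list src -j DROP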
At one hosting provider I saw a managed-firewall service on offer. Replacing the ipset add call in the script with a small curl session would let you filter the bad guys on an external firewall, offloading your own channel and network interface.
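Purely as a hypothetical illustration, with an invented endpoint and parameters (any real managed-firewall API will look different):
# Ask the hoster's firewall to block the offender instead of the local ipset.
curl -s "https://firewall.hoster.example/api/block?ip=${REMOTE_ADDR}&ttl=300"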
P.S. I had to smile at a forum post from one "hacker" bragging about how quickly he had put the server down. He never suspected that it was the server that had put him down.
Additions:
Thanks to megazubr for the tip about the timeout parameter when creating the blacklist: there is no need to clean it up from cron. The command to create it with a 5-minute timeout now looks like this:
ipset create web_black_list hash:ip timeout 300
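One handy property of timeout-enabled sets (standard ipset behavior, not something from the original setup): individual adds may override the default, so particularly persistent offenders could be banned longer. The address below is just an example:
# Ban this (example) address for 10 minutes instead of the default 5.
ipset add web_black_list 192.0.2.1 timeout 600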
Thanks also to alexkbs for the security suggestion: on production servers the FastCGI handler should listen on a unix socket with permissions for nginx only. In the nginx config we write:
error_page 503 =429 @blacklist;
location @blacklist {
fastcgi_pass unix:/var/run/blacklist-wrap.sock-1;
fastcgi_param SCRIPT_FILENAME /data/web/cgi/blacklist.sh;
include fastcgi_params;
}
And for spawn-fcgi.wrap:
FCGI_SOCKET=/var/run/blacklist-wrap.sock
FCGI_PROGRAM=/usr/sbin/fcgiwrap
FCGI_EXTRA_OPTIONS="-M 0700 -U nginx -G nginx"
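A quick sanity check (assuming the init script creates the numbered socket referenced by fastcgi_pass above): the socket should show owner nginx and mode 0700, so no other local user can reach the blacklist handler.
ls -l /var/run/blacklist-wrap.sock-1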