A couple of useful commands that come in handy during a DDoS (and not only then)

    In my case, the frontend server is nginx and the access log format is:

    log_format main '$remote_addr - $remote_user [$time_local] "$host" "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" -> $upstream_response_time';

    which produces log lines like this one:

    188.142.8.61 - - [14/Sep/2014:22:51:03 +0400] "www.mysite.ru" "GET / HTTP/1.1" 200 519 "6wwro6rq35muk.com" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.191602; .NET CLR 3.5.191602; .NET CLR 3.0.191602)" "-" -> 0.003

    1. tail -f /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | logtop

    It gives you the big picture: the distribution of unique IPs the requests come from, the number of requests per IP, and so on.
    Most valuable of all, it works in real time, so you can watch the effect of any configuration changes you make (for example, banning the TOP 20 most active IPs via iptables, or temporarily restricting the geography of requests in nginx via the GeoIP module, http://nginx.org/ru/docs/http/ngx_http_geoip_module.html).
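The "ban the TOP 20" idea above can be sketched as a small script. This is a minimal sketch under assumptions: the helper name ban_top_ips is hypothetical, the first log field is the client IP (as in the format above), and it only PRINTS the iptables commands rather than running them, so the list can be reviewed first (well-behaved crawlers can easily land in the top 20 too).

```shell
# Print (do not execute) iptables DROP rules for the 20 most active IPs.
ban_top_ips() {
    # field 1 of each line is the client IP in the log format shown above
    cut -d ' ' -f 1 "$1" | sort | uniq -c | sort -rn | head -n 20 |
    while read -r count ip; do
        echo iptables -A INPUT -s "$ip" -j DROP
    done
}
# After reviewing the output:
#   ban_top_ips /var/log/nginx/nginx.access.log | sh
```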

    It will show (and will be updated in real time) something like:

    3199 elements in 27 seconds (118.48 elements/s)
    1 337 12.48/s 95.65.66.183
    2 308 11.41/s 122.29.177.10
    3 304 11.26/s 122.18.251.54
    4 284 10.52/s 92.98.80.164
    5 275 10.19/s 188.239.14.134
    6 275 10.19/s 201.87.32.17
    7 270 10.00/s 112.185.132.118
    8 230 8.52/s 200.77.195.44
    9 182 6.74/s 177.35.100.49
    10 172 6.37/s 177.34.181.245


    Where in this case the columns mean:

    • 1 - the rank
    • 2 - the number of requests from this IP
    • 3 - the number of requests per second from this IP
    • 4 - the IP itself


    The total statistics for all requests are shown at the top.

    Here we see that IP 95.65.66.183 is sending 12.48 requests per second and has made 337 requests in the last 27 seconds. The remaining lines read the same way.

    Let's break it down piece by piece:

    tail -f /var/log/nginx/nginx.access.log - read the end of the log file continuously.

    cut -d ' ' -f 1 - split each line into fields using the separator given by the -d flag (a space in this example). The -f 1 flag prints only field number 1, which in this log format holds the IP the request came from.

    logtop - counts identical lines (IPs in this case), sorts them in descending order, and displays them as a continuously updated list with running statistics (on Debian it installs from the standard repository, e.g. via aptitude).
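If logtop happens not to be installed, a one-shot (non-live) approximation of the same top-talkers view can be built with awk. This is a sketch under assumptions: top_ips is a hypothetical helper name, and 5000 is an arbitrary window of recent log lines.

```shell
# Show the 10 most frequent client IPs in the last N lines of a log.
top_ips() {
    tail -n "${2:-5000}" "$1" |
    awk '{ cnt[$1]++ } END { for (ip in cnt) print cnt[ip], ip }' |
    sort -rn | head -n 10
}
# Usage: top_ips /var/log/nginx/nginx.access.log
```

Unlike logtop, this prints a single snapshot, so it has to be re-run (or wrapped in watch) to follow the situation over time.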

    2. grep "&key=" /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | sort | uniq -c | sort -n | tail -n 30 - shows the per-IP distribution of a given line in the log.

    In my case, I needed statistics on how often each IP used the &key=... parameter in its requests.

    It will show something like this:

    31 66.249.69.246
    47 66.249.69.15
    51 66.249.69.46
    53 66.249.69.30
    803 66.249.64.33
    822 66.249.64.25
    912 66.249.64.29
    1856 66.249.64.90
    1867 66.249.64.82
    1878 66.249.64.86


    • 1 - the number of occurrences of the line (an IP in this case)
    • 2 - the IP itself


    Here we see that a total of 1878 requests came from IP 66.249.64.86 (and a Whois lookup shows this IP belongs to Google, so it is not "malicious").

    Let's break it down piece by piece:

    grep "&key=" /var/log/nginx/nginx.access.log - find all lines in the log containing the substring "&key=" (anywhere in the line)
    cut -d ' ' -f 1 - (see the previous example) output the IP
    sort - sort the lines (required for the next command to work correctly)
    uniq -c - collapse the lines to unique ones and count the occurrences of each (the -c flag)
    sort -n - sort again, this time numerically (the -n flag)
    tail -n 30 - print the 30 lines with the most occurrences (the -n 30 flag; any number of lines can be given)
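The sort | uniq -c | sort -n chain above can also be collapsed into a single awk pass, which counts IPs in a hash instead of sorting every matching line first. A sketch, with count_param_ips as a hypothetical helper name:

```shell
# Count, per client IP, the log lines matching a substring; print the top 30.
count_param_ips() {
    grep -- "$2" "$1" |
    awk '{ cnt[$1]++ } END { for (ip in cnt) print cnt[ip], ip }' |
    sort -n | tail -n 30
}
# Usage: count_param_ips /var/log/nginx/nginx.access.log "&key="
```

On a large log this avoids the full sort of all matching lines; only the (usually much smaller) set of distinct IPs is sorted at the end.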

    Everything above was run on Debian or Ubuntu, but the commands should look much the same on other Linux distributions.
