Blocking resources prohibited by the ILV by URL, including HTTPS

Not so long ago, prompted by the latest ILV innovations in their “law” (which provokes nothing in me but profanity, yet we are still obliged to comply with it), I was browsing the forums and found that my colleagues use truly awful methods to block sites. So I decided to share my own approach, which not only blocks by URL but can also handle HTTPS.

I should say right away that I do not know whether anyone has already published something similar; I arrived at all of this on my own.

What do we need? No expensive proprietary solutions from Cisco and the like, do not be alarmed. Here I want to say a few kind words to everyone who works on GNU and open source: a huge, heartfelt thank you and a low bow. IMHO open source is ahead of the whole planet, the future lies with it, and this article is yet another confirmation of that.

The idea is this:
we build a unique list of blocked IPs in ipset, then add rules to the NAT PREROUTING chain. If the destination IP is in the list, the traffic is redirected to our transparent proxy, where the actual filtering by URL takes place.

So, the tools: ipset, iptables and squid (ssl bump).
It is no accident that I sang the praises of open source: as of today only squid offers a technology that lets you look inside HTTPS (at least I know of no others), while ipset shows excellent performance, holding thousands of entries without loading the CPU and without affecting network latency.

I will briefly describe how the dump (the registry download) is obtained. We download the P12FromGostCSP utility and use it to extract the private key, then convert it to PEM:

$ /gost-ssl/bin/openssl pkcs12 -in p12.pfx -out provider.pem -nodes -clcerts

Just in case, check the validity period:
$ openssl x509 -in provider.pem -noout -dates

Next, we build openssl with support for the GOST algorithms. A lot has been written on the Internet about how to do this, so I will not go into details.
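
Just as a hint, here is a minimal sketch of what enabling the bundled GOST engine usually looks like in openssl.cnf for OpenSSL 1.0.x builds; the section names and parameter set below are only an example and may differ in your build:

openssl_conf = openssl_def

[openssl_def]
engines = engine_section

[engine_section]
gost = gost_section

[gost_section]
engine_id = gost
default_algorithms = ALL
CRYPT_PARAMS = id-Gost28147-89-CryptoPro-A-ParamSet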

Then comes the script that actually fetches the dump. (I am not the author of the script or of the XML parser; I only reworked it for dump format 2.) The script must be called with arguments 0 and 1 at the required interval. The end result is three files (a rough sketch of the parsing step follows the list):
ip-abuse.txt - list of blocked IPs
url-abuse.txt - blocked URLs
subnet-abuse.txt - blocked subnets
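
The original retrieval script is not mine to publish, so here is only a rough sketch of the last step, splitting an already downloaded dump.xml into those three lists. The element names (<url>, <ip>, <ipSubnet>) are what dump format 2 is assumed to use; a real parser also has to deal with CDATA sections and encodings, and GNU grep with PCRE support is assumed:

#!/bin/bash
# Rough sketch, not the author's parser: cut the three lists out of dump.xml.
DUMP=/gost-ssl21/rzs/dump/dump.xml
OUT=/gost-ssl21/rzs/dump
grep -oP '(?<=<url>).*?(?=</url>)' "$DUMP" | sort -u > $OUT/url-abuse.txt
grep -oP '(?<=<ip>).*?(?=</ip>)' "$DUMP" | sort -u > $OUT/ip-abuse.txt
grep -oP '(?<=<ipSubnet>).*?(?=</ipSubnet>)' "$DUMP" | sort -u > $OUT/subnet-abuse.txt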

In short, the task at this stage is to get the IP addresses and URLs into separate files.
Now that we have the data to work with, it is time to implement the mechanism itself.
Download the current version of squid: www.squid-cache.org/Versions
Unpack and compile:
$ ./configure --enable-ssl --enable-ssl-crtd --with-openssl
$ make
$ sudo make install

By default squid is installed into /usr/local/squid.
The configuration file, accordingly, lives in /usr/local/squid/etc,
and the acl files with the blocklists go into /usr/local/squid/etc/acls.
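
A quick way to check that SSL bump support actually made it into the build (assuming the binary landed in sbin under the prefix above; adjust the path if yours differs):

/usr/local/squid/sbin/squid -v | grep ssl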

Generate the certificates:
openssl req -new -newkey rsa:1024 -days 3650 -nodes -x509 -keyout myCA.pem -out myCA.pem
openssl x509 -in myCA.pem -outform DER -out myCA.der

The following is an example of my configuration file:
acl deny_url url_regex -i "/usr/local/squid/etc/acls/zapret"
http_access deny deny_url
http_access allow all
dns_v4_first on
http_port 10.20.0.1:3128 transparent
http_port 3128
#HTTPS
https_port 10.20.0.1:3129 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/myCA.pem
sslproxy_flags DONT_VERIFY_PEER
sslproxy_cert_error allow all
always_direct allow all
ssl_bump client-first all
ssl_bump server-first all
ssl_bump none all
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
#sslcrtd_children 5
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
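
Before reloading it is worth letting squid validate the configuration itself (same path assumption as above):

/usr/local/squid/sbin/squid -k parse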

Now we need to build the acl list; I use this script:
#!/bin/bash
# Rebuild the zapret acl file from the current dump and reload squid.
> /usr/local/squid/etc/acls/zapret
cat /gost-ssl21/rzs/dump/url-abuse.txt | sort | uniq | while read LINE; do
echo "${LINE}.*" >> /usr/local/squid/etc/acls/zapret
done;
# Additional manually maintained entries.
cat /root/ZAPRET/prokur | sort | uniq >> /usr/local/squid/etc/acls/zapret;
/usr/local/squid/bin/squid -k reconfigure
exit 0

The idea is to append ".*" to the end of each URL, which matches any sequence of characters. For example, "casino.com.*" will block the domain and every link containing it.
A few more commands need to be run:
mkdir /usr/local/squid/var/lib
/usr/local/squid/libexec/ssl_crtd -c -s /usr/local/squid/var/lib/ssl_db
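
One more assumption worth stating: the ssl_db directory has to be writable by the user squid runs its helpers as (cache_effective_user, nobody by default), otherwise ssl_crtd will fail at startup:

chown -R nobody /usr/local/squid/var/lib/ssl_db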

That is all as far as squid is concerned.

Now we need to redirect the “forbidden” traffic to the filter.
To do this, we create two sets in ipset: ZAPRET, the unique list of blocked IPs, and ZAPRETNET, the list of blocked subnets.
ipset -N ZAPRETNET hash:net
ipset -N ZAPRET hash:ip

Then create the iptables rules:
# Redirect to the proxy
iptables -t nat -A PREROUTING -s “subscriber subnet” -p tcp -m set --match-set ZAPRET dst -m tcp --dport 80 -j DNAT --to-destination “proxy IP”:3128
iptables -t nat -A PREROUTING -s “subscriber subnet” -p tcp -m set --match-set ZAPRET dst -m tcp --dport 443 -j DNAT --to-destination “proxy IP”:3129
# Block entire subnets
iptables -A FORWARD -s “subscriber subnet” -m set --match-set ZAPRETNET dst -j DROP
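
Keep in mind that neither the ipset sets nor the iptables rules survive a reboot on their own. A minimal sketch of saving and restoring them (the file names here are arbitrary and the exact boot hook differs between distributions):

ipset save > /etc/ipset.rules
iptables-save > /etc/iptables.rules
# on boot, e.g. from rc.local:
ipset restore < /etc/ipset.rules
iptables-restore < /etc/iptables.rules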

Now we need to populate the ipset lists. Here is my script:
#!/bin/bash
# Generate the create_ruls file that (re)builds the ipset sets from the dump.
FILENAME="create_ruls"
# Start the generated file with a shebang so it can be executed directly.
echo '#!/bin/bash' > $FILENAME
echo 'if [ -z "`ipset -L | grep -w ZAPRET`" ];' >> $FILENAME
echo 'then' >> $FILENAME
echo 'ipset -N ZAPRET hash:ip' >> $FILENAME
echo 'else' >> $FILENAME
echo 'ipset -F ZAPRET' >> $FILENAME
echo 'fi' >> $FILENAME
echo 'if [ -z "`ipset -L | grep -w ZAPRETNET`" ];' >> $FILENAME
echo 'then' >> $FILENAME
echo 'ipset -N ZAPRETNET hash:net' >> $FILENAME
echo 'else' >> $FILENAME
echo 'ipset -F ZAPRETNET' >> $FILENAME
echo 'fi' >> $FILENAME
cat /gost-ssl21/rzs/dump/ip-abuse.txt | sort | uniq | while read LINE; do
echo ipset -A ZAPRET $LINE >> $FILENAME
done;
cat /gost-ssl21/rzs/dump/subnet-abuse.txt | sort | uniq | while read LINE; do
echo ipset -A ZAPRETNET $LINE >> $FILENAME
done;
chmod +x $FILENAME

$ sudo ./create_ruls
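
A few sanity checks once the lists are loaded; the set names and ports are the ones used above:

ipset -L ZAPRET | head                          # the set should now contain entries
iptables -t nat -L PREROUTING -n | grep 3128    # port 80 redirect in place?
iptables -t nat -L PREROUTING -n | grep 3129    # port 443 redirect in place?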

That is all. Open a browser and try to reach a blocked resource: you should see squid's page saying that access is denied.
To make the picture prettier, replace the files:
/usr/local/squid/share/errors/en/ERR_ACCESS_DENIED
/usr/local/squid/share/errors/ru/ERR_ACCESS_DENIED

with your own block page, something along the lines of "sorry, this resource is blocked in accordance with the law ....".

P.S. This blocking method has been battle-tested and still runs on production servers to this day. Ping did not grow by even a millisecond, and I did not notice any real difference in CPU load either. Since the ILV is very touchy about whether you actually receive the dump, you can uncomment this line in the script:
system ("/ usr / bin / gammu sendsms TEXT 7910xxxxxxx -len 400 -text 'Get data'; echo 'Unload from ILV received' | mail -s 'Unload from ILV received' kopita \ @ mail.ru");

Naturally, replace the phone number and mail address with your own. You will then get both SMS and e-mail alerts: for the former I use an unlocked Megafon Huawei E1550 modem plus gammu, for the latter you need to configure an MTA, for example Exim4. You also need to enter your organization's details in the script itself.

P.S. Since I realize that a good half of readers have no idea what this is all about, here is an explanation. A number of laws oblige, I stress, oblige every operator to do this; it is not their whim! And the sanctions for non-compliance are very harsh, up to and including revocation of the license. For simply failing to receive the dump once a day, a fine of tens of thousands is imposed, and in one case, for example, the director “presented” it to the employee responsible for the dump, to be paid off in installments over half a year.
Here is a list of laws for anyone interested in reading:
Federal Law No. 139-ФЗ dated July 28, 2012 amended the following laws of the Russian Federation:
Federal Law dated December 29, 2010 No. 436-ФЗ “On the Protection of Children from Information Harmful to Their Health and Development”;
Code of the Russian Federation on administrative offenses;
Federal Law of July 7, 2003 No. 126-ФЗ “On Communication”;
Federal Law of July 27, 2006 No. 149-ФЗ “On Information, Information Technologies and Information Protection”.


What I have described is only a method that “smoothes out” the consequences of these laws: it lets you keep access to a host that merely shares an IP with a blocked URL.
Not all of your HTTPS traffic goes through the proxy, only the traffic destined for a blocked IP; all other HTTPS sites remain accessible without restrictions.
Nobody is trying to deceive you: you see the forged certificate and you have every right to refuse it and not visit the monitored resource. For the average user this is a chance to keep using all the other, unblocked resources of the same hoster sitting on a blocked IP. Please address all complaints to rkn.gov.ru, and do not make me out to be a villain of universal proportions.
