Writing a Policy Server in C++ for Unity3D



Why do we need a policy server?


Starting with version 3.0, Unity builds targeting the Web Player use security mechanisms similar to those in Adobe Flash Player. The idea is that before accessing a server, the client asks it for "permission", and if the server does not "allow" the access, the client will not even try to connect. These restrictions apply both to requests to remote servers through the WWW class and to socket connections. If you want to make a REST request from your client to a remote server, a special XML file must be present at the root of the domain. It must be called crossdomain.xml and have the following format:
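The file itself did not survive in this copy of the article; a typical fully permissive crossdomain.xml, in the standard Flash/Unity cross-domain policy format, looks like this:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*"/>
</cross-domain-policy>
```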


Before making the request, the client downloads the security policy file, checks it, and, having seen that all domains are allowed, proceeds with the request you made.

If you need to connect to a remote server using sockets (TCP/UDP), then before connecting, the client makes a request to the server on port 843 to fetch a security policy file describing which ports and which domains you are allowed to connect to:
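This socket policy file was also lost in this copy; a typical permissive one, in the standard socket policy format, allows connections from any domain to any port:

```xml
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" to-ports="*"/>
</cross-domain-policy>
```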


If the connection does not satisfy all the policy parameters (domain, port), the client throws a SecurityException and does not attempt to connect to the server.

This article focuses on writing a server that serves security policy files; from here on I will call it the Policy server.

How should the policy server work?


The server operation scheme is simple:

  1. The server starts and listens on port 843 over TCP. The port can be overridden on the client side with Security.PrefetchSocketPolicy().
  2. The client connects to the server over TCP and sends the policy request string <policy-file-request/>.
  3. The server parses the request and sends the client the XML with the security policy.

In practice, parsing the request serves no purpose. What matters is the time the client spends waiting for the security policy file, since it adds to the delay before connecting to the target port. We can therefore simplify the server's behavior and send the client the security policy file immediately after the connection is established.

What is already there?


Currently there is a server written in Java + Netty (source code with instructions and a jar are available). One of its key weaknesses is the JRE dependency. Deploying a JRE on a Linux server is generally not a problem, but game developers are often client-side programmers who want as little setup as possible; in particular, they do not want to install a JRE and administer it afterwards. It was therefore decided to write a Policy server in C++ that would run as a native application on a Linux machine.

A Policy server written in C++ should be no slower than the old one; ideally, it should perform significantly better. The key performance metrics are the time a client spends waiting for the security policy file and the number of clients that can receive policy files simultaneously, which essentially also comes down to the wait time for the policy file.

For testing, I used this script . It works as follows:

  1. Calculates the average ping to the server
  2. Starts several threads (the number is specified in the script)
  3. Each thread requests a policy file from the Policy server
  4. If the received policy file matches the expected one, the waiting time is recorded for each request
  5. Prints the results to the console. We are interested in the following values: minimum latency, maximum latency, average latency, and the same values with ping subtracted

The script is written in Ruby, but since the standard Ruby interpreter does not support operating-system-level threads, I used JRuby to run it. The most convenient way is via rvm; the command to run the script looks like this:

rvm jruby do ruby test.rb

Test results for the Policy server written in Java + Netty:

Average, ms: 245
Minimum, ms: 116
Maximum, ms: 693

What do you need?


In essence, the task is to write a daemon in C++ that listens on several ports and, when a client connects, creates a socket, copies text data into it, and closes it. It should have as few dependencies as possible, and those it does have should be available in the repositories of the most common Linux distributions. We will use the C++11 standard. As a minimal set of libraries we take:


One port - one thread


The structure of the application is quite simple: we need functionality for working with command-line parameters, classes for working with threads, networking functionality, and logging. These are simple things that should not cause problems, so I will not dwell on them in detail; the code can be seen here. The tricky part is organizing the handling of client requests. The simplest solution is to send all the data right after the client socket is accepted and close the socket immediately. That is, the code responsible for handling a new connection will look like this:

void Connector::connnect(ev::io& connect_event, int )
{
	struct sockaddr_in client_addr;
	socklen_t client_len = sizeof(client_addr);
	// Accept the pending connection on the listening socket
	int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);
	if(client_sd < 0)
		return;
	// Send the policy file and close the socket immediately
	const char *data = this->server->get_text()->c_str();
	send(client_sd, (void*)data, sizeof(char) * strlen(data), 0);
	shutdown(client_sd, SHUT_RDWR);
	close(client_sd);
}

When I tried testing with a large number of threads (300 threads, 10 connections each), the test script never finished. From this we can conclude that this solution does not suit us.

Async


Transmitting data over the network takes time, so it clearly makes sense to separate accepting a client socket from sending the data. It would also be good to send data in multiple threads. A convenient tool is std::async, which appeared in the C++11 standard. The code responsible for handling a new connection now looks like this:

void Connector::connnect(ev::io& connect_event, int )
{
	struct sockaddr_in client_addr;
	socklen_t client_len = sizeof(client_addr);
	int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);
	if(client_sd < 0)
		return;
	// Hand the client socket off to an asynchronous task
	std::async(std::launch::async, [this](int client_socket) {
		const char *data = this->server->get_text()->c_str();
		send(client_socket, (void*)data, sizeof(char) * strlen(data), 0);
		shutdown(client_socket, SHUT_RDWR);
		close(client_socket);
	}, client_sd);
}

The downside of this solution is the lack of control over resources. With minimal changes to the code we get the ability to send data to the client asynchronously, but we cannot control how new threads are spawned. Creating a thread is expensive for the operating system, and a large number of threads can degrade server performance.

Pub / Sub


A suitable solution for this task is the publisher-subscriber pattern. The server operation scheme looks like this:
  • Several publishers, one per port, place into a buffer the descriptors of the client sockets to which the security policy file must be sent
  • Several subscribers take socket descriptors from the buffer, copy the security policy file into them, and close the sockets

A queue is a natural choice of buffer: the first client to connect to the server is the first to receive a policy file. The standard C++ library has a ready-made queue container, but it will not work for us, since we need a thread-safe queue. Moreover, we need the push operation to be non-blocking and the pop operation to be blocking. That is, when the server starts, several subscribers are launched and wait while the queue is empty; as soon as data appears there, one or more handlers wake up. Publishers asynchronously write socket descriptors into this queue.

After a bit of googling, I found some ready-made implementations:
  1. https://github.com/cameron314/concurrentqueue .
    Here we are interested in blockingconcurrentqueue, which is simply copied into the project as a header (.h) file. This is convenient, and there are no dependencies, but the solution has the following drawbacks:
    • There is no way to stop the subscribers. The only way to stop them is to push special data into the queue that signals to the subscribers that they should stop. This is rather inconvenient and could potentially cause a deadlock.
    • It is maintained by a single person, and commits have been infrequent lately

  2. tbb concurrent queue .
    A multithreaded queue from the tbb (Threading Building Blocks) library. The library is developed and maintained by Intel, and it has everything we need:
    • Blocking read from the queue
    • Non-blocking write to the queue
    • The ability to stop threads blocked waiting for data at any time

    The downside is that this solution adds a dependency: end users will have to install tbb on their server. In the most common Linux distributions tbb can be installed through the operating system's package manager, so dependencies should not be a problem.

Thus, the code for creating a new connection will look like this:

void Connector::connnect(ev::io& connect_event, int )
{
	struct sockaddr_in client_addr;
	socklen_t client_len = sizeof(client_addr);
	int client_sd = accept(connect_event.fd, (struct sockaddr *)&client_addr, &client_len);
	if(client_sd < 0)
		return;
	// Publish the client socket; a handler thread will serve it
	clients_queue()->push(client_sd);
	this->handled_clients++;
}

Client Socket Processing Code:

void Handler::run()
{
	LOG(INFO) << "Handler with thread id " << this->thread.get_id() << " started";
	while(this->is_run)
	{
		int socket_fd = clients_queue()->pop();
		if(socket_fd < 0)  // the queue was aborted
			continue;
		this->handle(socket_fd);
	}
	LOG(INFO) << "Handler with thread id " << this->thread.get_id() << " stopped";
}

The code for working with the queue:

void ClientsQueue::push(int client)
{
	// Non-blocking push; log and drop the client if the queue is full
	if(!this->queue.try_push(client))
		LOG(WARNING) << "Can't push socket " << client << " to queue";
}

int ClientsQueue::pop()
{
	int result;
	try
	{
		// Blocks until an element is available
		this->queue.pop(result);
	}
	catch(...)
	{
		// tbb throws tbb::user_abort when abort() is called
		result = -1;
	}
	return result;
}

void ClientsQueue::stop()
{
	// Wakes up all threads blocked in pop()
	this->queue.abort();
}

The full project code with installation instructions can be found here. The result of a test run with ten handler threads:

Average, ms: 151
Minimum, ms: 100
Maximum, ms: 1322

Total


Comparison of the results:

             Java + Netty    C++ Pub/Sub
Average, ms       245            151
Minimum, ms       116            100
Maximum, ms       693           1322

P.S. At the moment, the Unity Web Player is going through hard times due to the removal of NPAPI support in major browsers. But if anyone still uses it and runs their servers on Linux machines, they can use this server; I hope it proves useful. Special thanks to themoonisalwaysspyingonyourfears for illustrating the article.
