Working with Meta-Network Structures in Python - MetaNet Library

Published on March 11, 2015


When you see the only solution, ask others.



In this article I would like to discuss some of the background that led to the emergence of a tool for modeling meta-networks.

Training Automation


Initially, the problem arose of automating the training of artificial neural networks under certain time constraints. As a step toward solving it, an approach based on opposed neural networks was proposed [1]. The idea is to train two networks, one as usual:

Σ_{i=1..N_L} (d_i(n) − y_i^L(n))² ≤ ε

And the second on the reverse reference outputs:

Σ_{i=1..N_L} ((1 − d_i(n)) − y_i^L(n))² ≤ ε

where d is the reference set of outputs, y is the network output, ε is the deviation value (error), N is the number of neurons in the given layer, L is the index of the output layer, and n is the time instant. Then, to detect that one network has saturated or fallen into a local minimum, we could compare it with its opposite, and in the case of a significant difference, avoid wasting time on further training. But a new question arises: how to perform such an analysis? We decided to judge the reliability of network responses by the variance of those responses. The feasibility of this solution was suggested by an empirical observation: when a network saturates or falls into a local minimum, its output values differ by only a small spread. Having built the models, we found that in some cases this approach justifies itself. But then other questions arose: for which input vectors is the network wrong? And how should the degree of trust in the network's answers change, for example in classification problems, when approaching specific classes of recognizable objects?
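As a rough illustration of the two ideas above, here is a minimal sketch assuming binary reference outputs in [0, 1]; the helper names (opposed_targets, looks_stuck) and the variance threshold are illustrative choices of this sketch, not part of the approach in [1] or of MetaNet:

import numpy as np

def opposed_targets(d):
    # "Reverse" reference outputs: invert each binary target.
    return 1.0 - np.asarray(d)

def looks_stuck(outputs, spread_threshold=1e-3):
    # Empirical test described above: if the responses differ by only
    # a small spread, the network has likely saturated or fallen into
    # a local minimum, so its answers should not be trusted.
    return np.var(outputs) < spread_threshold

print(looks_stuck([0.51, 0.50, 0.52, 0.51]))  # True: near-constant responses
print(looks_stuck([0.9, 0.1, 0.8, 0.2]))      # False: responses vary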

Confidence in network responses


Intuitively, we simply decided to entrust this task to another neural network (NS), constructing it in such a way that its inputs are the states of all neurons of the observed network.

Then, having trained the NS, we can build a bank of its responses on a test sample. Knowing the errors and the network-state matrix, we can train the meta-network so that it classifies the errors of the core network during its operation.
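A minimal sketch of this data-collection step, using a toy stand-in network rather than MetaNet's FeedforwardNetwork; the weights, targets, and the 0.5 error threshold are assumptions made for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "core network": a fixed random 2-4-1 perceptron.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)               # hidden-layer neuron states
    y = sigmoid(h @ W2)               # output-layer neuron states
    return np.concatenate([h, y]), y  # all neuron states, plus the output

# Build the meta-network's training set on a test sample: its inputs are
# all neuron states of the core network, and its target is an indicator
# of whether the core network erred on that sample.
X = rng.uniform(size=(100, 2))
d = (X.sum(axis=1) > 1.0).astype(float)   # toy reference outputs
meta_inputs, meta_targets = [], []
for x, t in zip(X, d):
    states, y = forward(x)
    meta_inputs.append(states)
    meta_targets.append(float(abs(t - y[0]) > 0.5))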

[Figure 1: scheme of a meta-network whose inputs are the neuron states of the observed core network]

As a result, we got the following chart:

[Figure 2: MetaOut (meta-network output) and Err (core-network error) for sequentially presented samples of the Iris set]

The presented graphs give reason to believe that a meta-network can be used to build an analogue of confidence intervals. MetaOut in Figure 2 is the output of the meta-network. It can be seen that the closer a sample lies to one on which the core network errs, the higher the meta-network's error estimate. Err on the graph is the modulus of the difference between the target output and the core network's output, Err(n) = |d(n) − y(n)|. It is worth noting that the data were presented to the network sequentially, so the graph can be divided into three regions (by the subclasses of the Iris set) [2].

Meta-networks


But in truth, my free time was occupied by the question of knowledge in neural networks [3]. Turning to the question of knowledge, one cannot avoid turning to I. Kant. Exploring the nature of knowledge, Kant defined it as judgment. Judging about a subject synthetically, we attach to it a sign, or trait, that is not directly contained in it. If we judge a subject analytically, we limit ourselves to its concept and attach nothing to it that is not already contained in it. Thus, cognition can be defined as a synthetic judgment, since we call cognition only that which expands our knowledge of the subject. Further, synthetic thinking can be divided into knowledge borrowed from experience (a posteriori) and knowledge independent of experience (a priori). Of course, the possibility of a priori synthetic knowledge was considered in detail by Kant from the point of view of philosophy; we took the liberty of considering it from the point of view of neural networks. For more detail, refer to [3]. The main idea of using meta-networks is to try to explain the operation of an NS. Thus, by introducing a mutual correspondence between the set of NS states and the activity of a meta-network element, we can design networks, and by introducing an unambiguous (one-to-one) correspondence, we can try to trace the logic behind the network's answers.

The second issue related to this topic is consciousness. A number of questions arose after encountering the work of Thomas Metzinger, chiefly whether his model of consciousness can be represented as the activity of a meta-network element.

Seeing that it is quite difficult to adapt existing NS-modeling solutions to this task, we decided to write a small library. Thus the MetaNet project appeared. It is currently at a deep-alpha stage. Let's take a look at one example. First, create the multilayer perceptrons:

out_net = FeedforwardNetwork(2, [4], 2)    # 2 inputs, one hidden layer of 4 neurons, 2 outputs
inp_net_1 = FeedforwardNetwork(2, [4], 2)
inp_net_2 = FeedforwardNetwork(2, [4], 2)

We will train each of them on XOR.
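The training call itself is not shown in the article; here is a minimal sketch of what this step might look like, where the train(inputs, targets) method and the data layout are assumptions rather than MetaNet's confirmed API (the second output column is the always-zero dummy output described below):

xor_in = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
xor_out = [[0.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 0.0]]  # second column: dummy output, always 0

for net in (inp_net_1, inp_net_2, out_net):
    net.train(xor_in, xor_out)  # hypothetical training method

Having trained the networks, we create the meta-network: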

metanet = MetaNet()
metanet.add_input_net(inp_net_1)          # networks that receive the external signal
metanet.add_input_net(inp_net_2)
metanet.add_output_net(out_net)           # network whose activation completes a test() run
metanet.connect_nets(inp_net_1, out_net)  # forward connections
metanet.connect_nets(inp_net_2, out_net)
metanet.connect_nets(out_net, inp_net_1)  # backward connections, making the graph cyclic
metanet.connect_nets(out_net, inp_net_2)

Generally speaking, there are currently two functions: test and simulate. The first propagates the signal until all output networks have been activated (a maximum number of iterations can be set). The second lets the signal propagate for exactly as many steps as the user allows.



If we apply the signal [[1.0, 0.0], [0.0, 0.0]] to such a meta-network, the first network will output 1, the second 0, and the output network, accordingly, 1. We have not yet implemented neuron-to-neuron connections between networks, because by default the connections are injective; for this reason we added one dummy output, which is always 0. When a signal passes between networks, each neuron of the receiving network is assigned the highest of its current value and the proposed one.
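A hedged usage sketch of the two calls: only the names test and simulate come from the text above, while the argument and return conventions (one input vector per input network, a step count for simulate) are assumptions:

signal = [[1.0, 0.0], [0.0, 0.0]]    # one input vector per input network

result = metanet.test(signal)        # propagate until out_net activates
trace = metanet.simulate(signal, 4)  # propagate for a fixed number of steps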

When the test function is used, the network, despite the cyclic connections, will return its output once out_net is activated. If you use the simulate function, you can observe the following picture of the networks' outputs:

            t1      t2      t3      t4
inp_net_1   [1, 0]  [0, 0]  [1, 0]  [0, 0]
inp_net_2   [0, 0]  [0, 0]  [1, 0]  [0, 0]
out_net     [0, 0]  [1, 0]  [0, 0]  [0, 0]


It should be noted that as the signal passes through a given network, that network's inputs and outputs are then reset.
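To make the two propagation rules above concrete, here is a short sketch of the described behavior; this is an illustration, not MetaNet's internal code:

def deliver(current_inputs, proposed_inputs):
    # A receiving neuron keeps the highest of its current value
    # and the newly proposed one.
    return [max(cur, new) for cur, new in zip(current_inputs, proposed_inputs)]

def reset_after_pass(net_inputs, net_outputs):
    # Once the signal has passed through a network, that network's
    # inputs and outputs are reset.
    net_inputs[:] = [0.0] * len(net_inputs)
    net_outputs[:] = [0.0] * len(net_outputs)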

Conclusions


This article has shown the context in which the MetaNet library is being developed. I would like to ask Habrahabr users to criticize both the ideas and the library: at this early stage of development I want to take into account possible development paths that I am probably now losing sight of. The basic requirement for the code is readability. I would like to build a research tool, but losing sight of performance is not an option either; currently, when using MNIST, for example, training time grows to unacceptably large values.

The library is available at the following link.

Literature