Hybrid Logic Neuron

    If a recognizing perceptron machine responds to the image of an elephant with the signal "nonsense", to the image of a camel also with "nonsense", and to the portrait of a prominent scientist once again with "nonsense", this does not necessarily mean that it is faulty. It may simply have a philosophical disposition.
    K. Prutkov the Engineer, Thought No. 30.

    Strict logical activation function


    When we copy the principle of operation of a biological neuron to build artificial neural networks, we rarely stop to ask what meaning the activation function of a logical neuron model actually carries. The function can always be written as a conjunction, a logical "AND", over a specific set of inputs, and it is the simultaneous activity of exactly these inputs that activates the neuron. If we discard the external semantic binding of the inputs, the activation of a neuron can be described as follows. Within a single external event, consisting of a set of incoming images, a specific group of those images is joined into a new, purely logical image: an abstraction. Across a group of such events that activate the neuron, a common subset is singled out: a generalization. How exactly the abstraction and generalization happen depends on the rules by which the neuron is trained. For a single neuron, the learning closest to reality has always been learning without a teacher, but even here there are several possible principles of self-learning: two extremes and a compromise between them. The first extreme is the statistical search for the most common group of images: each time an event occurs, the internal counter of every currently active image is incremented. The second extreme is the search for the most frequently repeated overall picture of active images: among the currently active images, only the one with the lowest counter value has its counter incremented. The compromise between the two extremes is obvious.
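    As a rough illustration, here is a minimal sketch of these three self-learning rules in Python. The function names, the fractional increment used for the compromise rule, and the way the final "AND" group is frozen are all my own assumptions; the text above only fixes which counters grow under each extreme.

        # Sketch of the three unsupervised rules described above.
        # The compromise increment (0.5) and the top_k cut-off are illustrative
        # assumptions, not part of the original description.

        def update_counters(counters, active, rule="compromise"):
            """Update per-input counters for one event.
            counters: dict {input_id: counter value}
            active:   set of input ids active in this event
            """
            if not active:
                return counters
            if rule == "frequent":                 # first extreme
                for i in active:
                    counters[i] = counters.get(i, 0) + 1
            elif rule == "rare":                   # second extreme
                lowest = min(active, key=lambda i: counters.get(i, 0))
                counters[lowest] = counters.get(lowest, 0) + 1
            else:                                  # compromise between the two
                lowest = min(active, key=lambda i: counters.get(i, 0))
                for i in active:
                    counters[i] = counters.get(i, 0) + (1.0 if i == lowest else 0.5)
            return counters

        def freeze_logic_group(counters, top_k=3):
            """After training, fix the logical "AND" group: the top_k most
            reinforced inputs become the strict activation condition."""
            ranked = sorted(counters, key=counters.get, reverse=True)
            return set(ranked[:top_k])

        counters = {}
        for event in [{"trunk", "ears", "gray"}, {"trunk", "ears"}, {"trunk", "gray"}]:
            update_counters(counters, event)
        logic_group = freeze_logic_group(counters, top_k=2)   # e.g. {"trunk", "ears"}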
    After the training stage, the logical activation function of a neuron can no longer change during operation: retraining it would change the results. The logical functions of the neurons make up the logical framework of the network; by retraining that framework we could solve new problems, but the old solutions would be lost. To solve new problems on top of a stable logical framework, we need to reuse existing solutions through analogies and to continue learning by adding new neurons. The logical activation function itself offers no room for analogy precisely because of its strictness, and there is no point in relaxing that strictness: doing so would break the very logic on which the analogy mechanism must rely. However, our neuron also has inputs from its zone of responsibility that do not participate in the activation function. We will use them when the activation function has not fired. The analogy function works and learns at the same time, trying to remember the most recent situation at the neuron's inputs. It is the presence of two functions inside one neuron that makes it a hybrid, and both of these functions can activate the neuron, but the strict logical activation function is primary and the analogy function is secondary.
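    The two-stage structure just described can be sketched as follows; this is only an illustration of the idea, and the class and method names are mine. The analogy object referenced here is sketched further below.

        # Hybrid neuron: a frozen strict "AND" group plus an optional
        # secondary analogy function (sketched in the next section).

        class HybridNeuron:
            def __init__(self, logic_group, analogy=None):
                self.logic_group = set(logic_group)   # frozen after training
                self.analogy = analogy                # secondary function, may be absent

            def strict_activation(self, active_inputs):
                """Strict logical AND: fire only when every input of the
                frozen group is present in the current event."""
                return self.logic_group <= set(active_inputs)

            def fire(self, active_inputs):
                """Primary function first; the analogy vote only as a fallback."""
                if self.strict_activation(active_inputs):
                    return True
                if self.analogy is not None:
                    return self.analogy.vote(active_inputs)
                return False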

    Activation function by analogy


    The analogy function works by the method of weights: depending on whether more of the active analogies are positive or negative, the neuron either activates or stays silent. A weight coefficient can be introduced for the analogies in order to control the speed of retraining.
    The principle behind forming a positive analogy is that the primary activation function matters most: only when it has fired is there reason to treat the active analogy connections as additional factors that accompany the recognition of the neuron's image. The weight of an analogy connection therefore increases only when the primary activation function has been triggered. The analogy connections can even be called the context of the image recognized by the primary activation function.
    The principle behind forming a negative analogy is that, without a positive result from the primary logical activation function, a neuron with a predominant number of active positive analogies should fall silent over time. The weight of an active analogy connection is reduced only until the neuron calms down; in other words, the reason for reducing the weight of a connection is that the neuron was activated precisely by the analogy function.
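    One possible reading of this mechanism is a simple weighted vote over analogy connections, with the two rules above deciding when a weight grows and when it shrinks. The learning-rate constants and the use of a single signed weight per connection are my assumptions; the text only fixes the conditions for increasing and decreasing a weight.

        # Analogy function as a signed-weight vote. reinforce() is meant to be
        # called only when the primary logical function has fired, suppress()
        # only when the neuron was activated by the analogy vote alone.

        class AnalogyFunction:
            def __init__(self, learn_rate=1.0, forget_rate=1.0):
                self.weights = {}              # connection id -> signed weight
                self.learn_rate = learn_rate   # controls the speed of retraining
                self.forget_rate = forget_rate

            def vote(self, active_links):
                """Fire if active positive analogies outweigh negative ones."""
                return sum(self.weights.get(l, 0.0) for l in active_links) > 0.0

            def reinforce(self, active_links):
                """Positive analogy: the active connections become context
                for the image recognized by the primary function."""
                for l in active_links:
                    self.weights[l] = self.weights.get(l, 0.0) + self.learn_rate

            def suppress(self, active_links):
                """Negative analogy: reduce weights until the neuron, which
                fired by analogy alone, calms down."""
                for l in active_links:
                    self.weights[l] = self.weights.get(l, 0.0) - self.forget_rate

    Wired into the HybridNeuron sketched earlier, reinforce() would be called after a strict activation and suppress() after an activation produced only by the vote.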

    Neural network with time logic


    By its logic, the analogy function already provides memorization of the previous event: the result depends not only on the current event but also on the sequence of events that preceded it. The strict activation function, however, does not by itself make it possible to link a sequence of events into a rigorous logical framework. This can be achieved by shifting the neuron's output by one clock cycle. The shift of the output allows neurons to be connected in the network in an arbitrary way, because the functions of all neurons can first be computed and only then the results sent to the outputs, without violating the logic of signal propagation through the network. Such networks can memorize sequences and analyze information flows.
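    A minimal sketch of such a clocked update, assuming the HybridNeuron interface from the earlier sketches: all neurons are computed from the picture published on the previous tick, and only then do the new outputs become visible, so arbitrary (including cyclic) connections remain consistent.

        # One-clock-cycle output shift: compute everything first, publish later.
        # Names and the dictionary-based wiring are illustrative assumptions.

        def run_network(neurons, connections, external_inputs, ticks):
            """neurons:          dict {name: object with fire(active_inputs)}
            connections:         dict {name: set of signal names feeding that neuron}
            external_inputs:     dict {tick: set of externally active signal names}
            """
            outputs = {name: False for name in neurons}     # published last tick
            history = []
            for t in range(ticks):
                # picture seen by every neuron on this tick: external signals
                # plus neuron outputs delayed by one clock cycle
                active = set(external_inputs.get(t, set()))
                active |= {name for name, fired in outputs.items() if fired}
                # phase 1: compute all neurons from the same frozen picture
                new_outputs = {name: n.fire(active & connections[name])
                               for name, n in neurons.items()}
                # phase 2: only now publish the results to the outputs
                outputs = new_outputs
                history.append(dict(outputs))
            return history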

    Finding what is new to the network and adding new neurons


    It may happen that the information being processed is completely new to our network. In that case no neuron is activated by its activation function, and even the analogies turn out to be exclusively negative. This is the situation in which we can neither recognize the incoming information nor use analogies from the existing logical neurons to do so. In other words, the presence of only active negative analogies is a sign that new information has been discovered which cannot be classified by the existing logical framework of the network. And it is exactly at the place where the greatest number of active negative analogies arises that a new neuron should be added and trained.
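    In terms of the earlier sketches, the novelty test could look like this; the attribute names and the ranking by the count of negative links are my assumptions.

        # Novelty: no neuron fires by its strict function and every active
        # analogy connection carries a negative weight. The neuron with the
        # most active negative analogies marks where to grow a new neuron.

        def find_place_for_new_neuron(neurons, active_inputs):
            best_name, best_count = None, 0
            for name, n in neurons.items():
                if n.strict_activation(active_inputs):
                    return None                     # recognized: nothing new
                if n.analogy is None:
                    continue
                weights = n.analogy.weights
                active_links = [l for l in active_inputs if l in weights]
                if any(weights[l] > 0 for l in active_links):
                    return None                     # a positive analogy is active
                negatives = sum(1 for l in active_links if weights[l] < 0)
                if negatives > best_count:
                    best_name, best_count = name, negatives
            return best_name    # place to add a new neuron, or None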
