Using a Hopfield Neural Network to Solve a Simple Problem

Talking about artificial intelligence and bulky expert systems is all well and good, but how can all this theory be brought closer to real life, to our own cool tasks?
A little bit about the task.
We need to write a program that recognizes reference images in a “noisy” picture.
Neural network selection
In this article I will not go into the theoretical foundations of neural networks; I think there is plenty of such information on the Internet =) Still, without some theory it will be difficult to understand what I am writing about.
So, the Hopfield neural network is best suited for solving our problem.

It consists of a single layer of neurons, whose number is simultaneously the number of inputs and outputs of the network. Each neuron is connected by synapses to all other neurons and also has one input synapse through which the input signal arrives. The output signals, as usual, are formed on the axons.
A certain set of binary signals is considered the reference (exemplary) set. Given an arbitrary, non-ideal signal at its input, the network should be able to select (“recall” from partial information) the corresponding sample, if there is one, or otherwise produce the most similar image.
In the general case, any signal can be described by the vector X = {xi: i = 1, 2, ..., n},
where n is the number of neurons in the network and the dimension of the input and output vectors. Each xi is either +1 or -1.
Let X^k denote the vector describing the k-th reference pattern, and let Y = {yi: i = 1, 2, ..., n} be the vector of the network's output values. When the network recognizes (or “remembers”) some sample based on the data presented to it, the outputs will contain exactly that sample. Otherwise, if the noise is too strong, the outputs will contain “garbage”.
At the initialization stage of the network, the weighting coefficients of the synapses are set as follows:

Wij = Σk xi^k · xj^k for i ≠ j, and Wii = 0.

Here i and j are the indices of the presynaptic and postsynaptic neurons, respectively, and xi^k, xj^k are the i-th and j-th elements of the k-th reference vector; the sum is taken over all reference patterns (k = 1, 2, ..., m).
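To make the rule concrete, here is a small worked example (the numbers are purely illustrative and are not part of the original article). Suppose a three-neuron network stores two reference patterns X^1 = (1, -1, 1) and X^2 = (-1, -1, 1). Then
W12 = (1)(-1) + (-1)(-1) = 0,
W13 = (1)(1) + (-1)(1) = 0,
W23 = (-1)(1) + (-1)(1) = -2,
and all diagonal elements W11 = W22 = W33 = 0. The matrix is symmetric, so W21 = W12 and so on.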
The network operates as follows (t is the iteration number):
1. An unknown signal is fed to the network inputs. In practice, this is done by directly setting the output values:

yi(0) = xi, i = 1, 2, ..., n,

so drawing the input synapses explicitly in the network diagram is purely a convention. The zero in parentheses after yi denotes the zeroth iteration of the network cycle.
2. The new states of the neurons are calculated:

sj(t + 1) = Σi Wij · yi(t), j = 1, 2, ..., n,

and then the new axon (output) values:

yj(t + 1) = f(sj(t + 1)),

where f is the threshold activation function (the sign of its argument).
3. Check whether the output (axon) values have changed during the last iteration. If they have, go back to step 2; otherwise (the outputs have stabilized) the process ends. The output vector is then the sample that best matches the input data.
Thus, when a new vector is supplied, the network moves from vertex to vertex (from state to state) until it stabilizes. The stable vertex is determined by the network weights and the current inputs. If the input vector is partially incorrect or incomplete, the network stabilizes at the vertex closest to the desired one.
It has been proved that a sufficient condition for the stable operation of such a network is:
Wij = Wji, Wii = 0.
As I already said, the network sometimes fails to perform recognition and produces a non-existent image at its output. This is due to the limited capacity of the network: for a Hopfield network, the number of memorized images N should not exceed roughly 0.15n. In addition, if two images A and B are very similar, they will probably cause cross-associations in the network, i.e. presenting vector A at the inputs will produce vector B at the outputs and vice versa.
Let's start writing the program. The neural network needs a binary sequence as its input (in our case, values of -1 and 1).
Let “1” denote a black pixel and “-1” a white one, so that we can convert the picture into a sequence.
For speed and accuracy of restoration we will use the following scheme: the picture is 100x100 pixels, and each pixel gets its own neuron.
Thus, the network will consist of 10,000 neurons (by the capacity estimate above, it could store at most about 0.15 · 10,000 = 1,500 patterns, so two reference images fit with a large margin).
Let's start with the representation of the elementary unit, a neuron:
//neuron description
private class Neuron
{
    //update the neuron's state (threshold activation)
    public void ChangeState()
    {
        if (s > 0) y = 1;
        if (s < 0) y = -1;
        if (s == 0) y = 0;
    }

    public int s;     //weighted sum of the inputs
    public int x;     //input
    public int y;     //output
    public int index; //neuron index in the network
}
Now consider the neural network itself; for better understanding, I think it is worth giving the main part of the listing =)
namespace Neurons
{
    class NetHopfild
    {
        //neuron description
        private class Neuron
        {
            ...
        }

        private const int size = 10000;

        public NetHopfild()
        {
            //instantiate the neurons and the connection (weight) matrix
            mass = new Neuron[size];
            for (int i = 0; i < size; i++)
            {
                mass[i] = new Neuron();
            }
            matrix_of_connect = new int[size, size];
            last_y = new int[size];

            for (int i = 0; i < size; i++)
            {
                mass[i].index = i;
            }
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    matrix_of_connect[i, j] = 0;
                }
            }
        }

        //training: accumulate the weights for one reference pattern
        public void Initialise(int[] input)
        {
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    if (i == j) matrix_of_connect[i, j] = 0;
                    else matrix_of_connect[i, j] += input[i] * input[j];
                }
            }
        }

        public void FindImage(int[] input)
        {
            for (int i = 0; i < size; i++) //store the incoming values
            {
                mass[i].x = input[i];
                last_y[i] = mass[i].y;
            }
            for (int k = 0; k < size; k++) //first compute S, then Y
            {
                mass[k].s = 0;
                for (int i = 0; i < size; i++)
                {
                    mass[k].s = mass[k].s + matrix_of_connect[i, mass[k].index] * mass[i].x;
                }
                mass[k].ChangeState();
            }
            bool flag = true;
            //check whether the previous and new output vectors are equal
            for (int i = 0; i < size; i++)
            {
                if (last_y[i] != mass[i].y)
                {
                    flag = false;
                    break;
                }
            }
            if (flag == false)
            {
                int[] temp = new int[size];
                for (int i = 0; i < size; i++)
                {
                    temp[i] = mass[i].y;
                }
                //the state is not yet stable, so the function calls itself
                //recursively until the network stabilizes
                FindImage(temp);
            }
        }

        private Neuron[] mass;
        private int[,] matrix_of_connect;
        private int[] last_y;

        //indexer for the neurons' synapses (the output values of every neuron)
        public int[] Synapses
        {
            ...
        }
    };
}
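To connect the pieces, here is a minimal usage sketch of my own (it is not part of the original source). The method name RestoreImage and the argument names are hypothetical; the int[] arguments are 10,000-element arrays of -1/1 values, obtained for example from the BitmapToIntArray() method shown below, and I assume the Synapses indexer (elided above) simply returns the output values y of all neurons:

//hypothetical usage sketch, not from the original source
private int[] RestoreImage(int[] firstReference, int[] secondReference, int[] noisy)
{
    NetHopfild net = new NetHopfild();

    //training: Initialise is called once per reference pattern,
    //the weights are accumulated (summed) over all of them
    net.Initialise(firstReference);
    net.Initialise(secondReference);

    //recognition: the network iterates until its outputs stop changing
    net.FindImage(noisy);

    //assumption: Synapses returns the recovered -1/1 output vector
    return net.Synapses;
}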
So the network is ready; now let's work with it.
I will not describe the entire source here, only the main points.
Converting a picture into a binary sequence
private int[] BitmapToIntArray()
{
    //convert the picture into a sequence of 1 and -1
    Color currentPixel;
    int[] temp = new int[size];
    int index = 0;
    for (int i = 0; i < picture.Height; i++)
    {
        for (int j = 0; j < picture.Width; j++)
        {
            currentPixel = picture.GetPixel(j, i);
            //white pixels become -1, everything else becomes 1
            if (currentPixel.ToArgb() == Color.White.ToArgb())
            {
                temp[index] = -1;
            }
            else
            {
                temp[index] = 1;
            }
            index++;
        }
    }
    return temp;
}
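The reverse conversion, from the restored -1/1 vector back to a picture, is not shown in the article. A sketch of such a helper (my own illustration; IntArrayToBitmap is a hypothetical name, and the picture field is assumed to hold the original image, as in the method above) might look like this:

//hypothetical reverse conversion: 1 -> black pixel, anything else -> white pixel
private Bitmap IntArrayToBitmap(int[] data)
{
    Bitmap result = new Bitmap(picture.Width, picture.Height);
    int index = 0;
    for (int i = 0; i < result.Height; i++)
    {
        for (int j = 0; j < result.Width; j++)
        {
            result.SetPixel(j, i, data[index] == 1 ? Color.Black : Color.White);
            index++;
        }
    }
    return result;
}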
Results
Let there be two reference images.

We train our neural network on the first one.

Similarly, we train the network on the second image.

Now let's add noise to the picture:

We load the distorted picture into the program:

Click “Restore”, and after a few seconds:

As you can see, out of the two emoticons it was trained on, the network produced the one that best matches the distorted picture. You can play around with the recovery process for a long time, but I think that is enough for this article =) Experiment, and you will get really interesting results ;)

For those who are interested, there is an archive with the images and noise samples, and the program itself.
Well, and of course, if someone is really inspired by the idea and needs the sources, write in the comments and I will definitely post them =)
UPD: Fixed the problems in the code =) that were pointed out by graninas, Nagg, Peregrinus.