Neural networks for dummies. Start



    At university, the topic of neural networks managed to pass my specialty by entirely, despite my great interest in it. My attempts at self-education were repeatedly dashed, like an ignorant brow against the indestructible walls of the citadel of science, by opaque off-the-cuff terminology and the muddled explanations of dry university textbooks.

    In this article (a series of articles?) I will try to shed light on neural networks from the point of view of a complete beginner: in plain language, with simple examples, laying everything out neatly, instead of "an array of neurons forms a perceptron working according to the well-known, proven scheme".

    If that interests you, welcome under the cut.

    Goals

    What are neural networks for?
    A neural network is a system that learns. It acts not only according to a given algorithm and formulas, but also on the basis of past experience, like a child who assembles the same puzzle again and again, making fewer mistakes each time.

    And, as fashionable authors like to write, a neural network consists of neurons.
    Here we need to stop and figure out what that means.


    Let's agree that a neuron is just a kind of imaginary black box with many inputs and one output.
    Both the incoming and the outgoing information can be analog (most often it will be).

    How the output signal is formed from the pile of inputs is determined by the neuron's internal algorithm.

    As an example, we will write a small program that recognizes simple images, say, letters of the Russian alphabet in bitmap images.
    We agree that in its initial state our system has an "empty" memory, a kind of newborn brain, ready for battle.
    To make it work correctly, we will need to spend time training it.

    Dodging the tomatoes flying my way, I'll say that we will write in Delphi (it happened to be at hand when this article was written). If necessary, I will help translate the example into other languages.

    I also ask you to go easy on the quality of the code: the program was written in an hour, just to explore the topic, and is hardly suitable for serious tasks.

    So, given the task, how many output options can there be? That's right: as many as the letters we want to recognize. There are only 33 of them in the alphabet, so we will stop there.

    Next, we decide on the input data. To keep things simple, we will feed in each letter as a 30x30 bitmap image:


    As a result, we need to create 33 neurons, each of which will have 30x30 = 900 inputs.
    Let's create a class for our neuron:

    type
      Neuron = class
        name: string; // The neuron's name – the letter it is associated with
        input: array[0..29,0..29] of integer; // The 30x30 input array
        output: integer; // Here it will report what it decided
        weight: integer; // The score accumulated during recognition (used below)
        memory: array[0..29,0..29] of integer; // Here it stores its accumulated experience
      end;
    


    We create an array of neurons, one per letter:

    for i := 0 to 32 do begin
      neuro_web[i] := Neuron.Create;
      neuro_web[i].output := 0; // Keep it silent for now
      neuro_web[i].name := chr(Ord('A') + i); // Letters from А to Я
    end;
    


    Now the question is: where will we store the neural network's "memory" between runs of the program?
    To avoid getting into INI files or, God forbid, databases, I decided to store it in the same kind of 30x30 raster images.
    For example, here is the memory of the neuron "К" after the program has been run on different fonts:



    As you can see, the most saturated areas correspond to the most common pixels.
    We will load the “memory” into each neuron when it is created:
    p := TBitmap.Create;
    p.LoadFromFile(ExtractFilePath(Application.ExeName) + '\res\' + neuro_web[i].name + '.bmp');
    


    At first, in the untrained program, the memory of each neuron will be a 30x30 white spot.
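For readers who want to play with the idea outside Delphi, here is a sketch of this memory persistence in Python; it uses plain-text PGM images instead of the article's BMP files purely to avoid dependencies, and all file and function names are illustrative:

```python
# A neuron's 30x30 "memory" persisted as a grayscale image. The article
# uses BMP files; this sketch writes ASCII PGM (P2) instead, purely to
# stay dependency-free.

def save_memory(path, memory):
    """Write a 2-D list of 0..255 ints as an ASCII PGM image."""
    rows, cols = len(memory), len(memory[0])
    with open(path, "w") as f:
        f.write(f"P2\n{cols} {rows}\n255\n")
        for row in memory:
            f.write(" ".join(str(v) for v in row) + "\n")

def load_memory(path):
    """Read the PGM back into a 2-D list of ints."""
    with open(path) as f:
        tokens = f.read().split()
    # tokens: ["P2", cols, rows, maxval, pixel values...]
    cols, rows = int(tokens[1]), int(tokens[2])
    vals = [int(t) for t in tokens[4:]]
    return [vals[r * cols:(r + 1) * cols] for r in range(rows)]

# An untrained neuron starts with an all-white memory:
blank = [[255] * 30 for _ in range(30)]
```

Saving `blank` this way just produces a white 30x30 square, matching the "white spot" described above.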

    Recognition will work as follows:

    - Take the first pixel of the input
    - Compare it with the first pixel in memory (which holds a value 0..255)
    - Compare the difference with a certain threshold
    - If the difference is less than the threshold, we consider that at this point the letter is similar to the one in memory, and add +1 to the neuron's weight

    And so on for all the pixels.

    The weight of a neuron is a number (in theory, up to 900) that reflects how similar the processed input is to what is stored in memory.
    At the end of recognition we have a set of neurons, each of which believes it is right to some degree. That degree is the neuron's weight: the greater the weight, the more likely that this particular neuron is right.
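Since the article offers to translate the example into other languages, here is the same scoring idea as a small Python sketch (the function and variable names are my own; the thresholds 120 and 250 follow the Delphi code):

```python
# Score one neuron against an input image by counting pixels whose
# brightness is close to the stored memory. The thresholds mirror the
# article: 120 for color difference, 250 to skip near-white pixels so
# the blank background does not inflate the weight.

def neuron_weight(memory, image, diff_threshold=120, white_cutoff=250):
    weight = 0
    for row_mem, row_img in zip(memory, image):
        for n, m in zip(row_mem, row_img):
            # count a match only for non-white pixels close to memory
            if abs(m - n) < diff_threshold and m < white_cutoff:
                weight += 1
    return weight
```

On a 30x30 image this yields a weight between 0 and 900, exactly the range described above.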

    Now we feed the program an arbitrary image and run it through each neuron:

    for x := 0 to 29 do
      for y := 0 to 29 do begin
        n := neuro_web[i].memory[x,y];
        m := neuro_web[i].input[x,y];
        if abs(m - n) < 120 then // Color-difference threshold
          if m < 250 then // Also skip white pixels so they do not inflate the weight
            neuro_web[i].weight := neuro_web[i].weight + 1;
        if m <> 0 then begin
          if m < 250 then
            n := round((n + (n + m) / 2) / 2);
          neuro_web[i].memory[x,y] := n;
        end
        else if n <> 0 then begin
          if m < 250 then
            n := round((n + (n + m) / 2) / 2);
          neuro_web[i].memory[x,y] := n;
        end;
      end;
    


    As soon as the loop finishes for the last neuron, we choose the one with the greatest weight:

    if neuro_web[i].weight > max then begin
      max := neuro_web[i].weight;
      max_n := i;
    end;
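In Python, assuming a list of (name, weight) pairs, the same winner-takes-all choice could look like this (names are illustrative):

```python
# Pick the neuron with the greatest weight. Ties go to the first neuron
# encountered, matching the strict '>' comparison in the Delphi loop.

def best_neuron(weights):
    # weights: list of (name, weight) pairs, one per neuron
    max_w = -1
    best = None
    for name, w in weights:
        if w > max_w:
            max_w, best = w, name
    return best
```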
    


    The value of max_n is the program's answer: the letter it believes we have slipped it.
    At first this will not always be correct, so we need to add a learning algorithm.

    
    s := InputBox('Enter the letter', 'The program thinks this is the letter ' + neuro_web[max_n].name, neuro_web[max_n].name);
    for i := 0 to 32 do begin // Loop over the neurons
      if neuro_web[i].name = s then begin // Update the memory of the right neuron
        for x := 0 to 29 do
          for y := 0 to 29 do
            p.Canvas.Pixels[x,y] := RGB(neuro_web[i].memory[x,y], neuro_web[i].memory[x,y], neuro_web[i].memory[x,y]); // Write the new memory pixel value
        p.SaveToFile(ExtractFilePath(Application.ExeName) + '\res\' + neuro_web[i].name + '.bmp');
      end;
    end;
    


    The memory update itself is done like this:

    n := round((n + (n + m) / 2) / 2);
    


    That is, if this point is absent from the neuron's memory but the teacher says it belongs to this letter, we remember it, though not completely, only by half. With further training, the influence of this lesson will grow.
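A quick worked example in Python, using the update formula from the code: starting from white memory (n = 255) and a black input pixel (m = 0), each lesson pulls the stored value roughly halfway toward the input, so it converges to the input over repeated training.

```python
# Apply the memory-update rule n := round((n + (n + m)/2) / 2)
# repeatedly and watch a white memory pixel (255) converge toward a
# black input pixel (0).

def update(n, m):
    return round((n + (n + m) / 2) / 2)

n, m = 255, 0
history = [n]
for _ in range(6):
    n = update(n, m)
    history.append(n)
# history: [255, 191, 143, 107, 80, 60, 45]
```

The fixed point of the rule is n = m, so with enough identical lessons the memory pixel matches the input pixel exactly.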

    Here are a few iterations for the letter G:



    With that, our program is ready.

    Training

    Let's start the training.
    We open an image of the letters and patiently point out the program's errors:



    After a while, the program begins to reliably recognize even letters it has not seen before:



    Conclusion

    The program is one continuous flaw: our neural network is very stupid, it is not protected from user errors during training, and the recognition algorithms are dead simple.
    But it gives a basic idea of how neural networks function.

    If this article interests the respected Habr audience, I will continue the series, gradually complicating the system: introducing additional connections and weights, covering some popular neural network architectures, and so on.

    You can torment our freshly born intellect by downloading the program along with the source here .

    On that note, I bow out; thanks for reading.

    UPD: What we have so far is a blank for a neural network. It is not yet the real thing, but in the next article we will try to turn it into a full-fledged neural network.
    Thanks to Shultc for the comment.
