Neural network optimization methods

Published on January 04, 2017


    In the overwhelming majority of sources about neural networks, "now let's train our network" means "feed the objective function to the optimizer" with only minimal tuning of the learning rate. It is sometimes mentioned in passing that the weights can be updated not only by stochastic gradient descent, but, remarkably, without any explanation of what those other algorithms are or what their mysterious β and γ parameters mean. Even instructors of machine learning courses often do not dwell on this. I would like to fill the gap in the RuNet regarding the various optimizers you may come across in modern machine learning packages. I hope the article will be useful to people who want to deepen their understanding of machine learning or even invent something of their own.


    [image]


    Under the cut there are a lot of pictures, including animated GIFs.


    The article is aimed at a reader already familiar with neural networks. It is assumed that you understand the essence of backpropagation and SGD. I will not give rigorous proofs of convergence for the algorithms presented below; instead I will try to convey their ideas in simple language and show that the formulas are open to further experimentation. The article covers far from all the difficulties of machine learning, and far from all the ways of overcoming them.


    Why tricks are needed


    Let me remind you how the formulas for the usual gradient descent look:


    \Delta\theta = -\eta\nabla_\theta J(\theta)


    \theta = \theta + \Delta\theta = \theta - \eta\nabla_\theta J(\theta)


    where θ are the network parameters, J(θ) is the objective function (the loss function in the case of machine learning), and η is the learning rate. It looks surprisingly simple, but a lot of magic is hidden in ∇_θ J(θ): updating the parameters of the output layer is simple enough, but to get to the parameters of the layers behind it we have to pass through the non-linearities, whose derivatives contribute their factors. This is the familiar principle of error backpropagation.
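

    To fix the notation, here is a minimal sketch of this update step in numpy-style Python. The function grad_fn is a hypothetical stand-in for whatever computes the gradient for the current parameters; in a real network it would be the result of backpropagation.


    def sgd_step(theta, grad_fn, lr=0.01):
        # theta: flat array with all network parameters
        # grad_fn(theta): returns dJ/dtheta of the same shape (e.g. from backpropagation)
        return theta - lr * grad_fn(theta)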


    The explicit formulas for updating the weights somewhere in the middle of the network look frightening, because every neuron depends on all the neurons it is connected to, and those on all the neurons they are connected to, and so on. Moreover, even in "toy" neural networks there can be about 10 layers, and among the networks holding the Olympus of classification on modern datasets there are far, far more. Each weight is a variable of J(θ). Such an incredible number of degrees of freedom allows very complex mappings to be built, but it gives researchers a headache:


    • Getting stuck in local minima or saddle points, of which a function of >10⁶ variables can have a great many.
    • The complex landscape of the objective function: plateaus alternate with regions of strong non-linearity. The derivative on a plateau is almost zero, while a sudden cliff, on the contrary, can throw us too far.
    • Some parameters are updated much more rarely than others, especially when informative but rare features occur in the data, which hurts the finer points of the rule the network generalizes. On the other hand, attaching too much importance to all rarely seen features across the board can lead to overfitting.
    • Too low a learning rate makes the algorithm converge for a very long time and get stuck in local minima; too high a rate makes it "fly over" narrow global minima or diverge altogether.

    Computational mathematics knows advanced second-order algorithms that can find a good minimum on difficult terrain, but here the number of weights strikes again. To apply an honest second-order method "head on", you have to compute the Hessian of J(θ) - the matrix of derivatives over every pair of parameters (already bad) - and, say, for Newton's method, also its inverse. We have to invent all sorts of tricks that cope with these problems while keeping the task computationally tractable. Working second-order optimizers do exist, but for now let's concentrate on what we can achieve without looking at second derivatives.


    Nesterov Accelerated Gradient


    The idea of momentum-based methods is obvious and simple: "if we have been moving in a certain direction for some time, we will probably keep moving in that direction for a while". To do this, we need to be able to refer to the recent history of changes of each parameter. We could store the last n instances of Δθ and honestly take their average at every step, but that approach takes up too much memory for large n. Fortunately, we do not need an exact average, only an estimate, so we will use an exponential moving average.


    v_t = \gamma v_{t-1} + (1-\gamma) x


    To accumulate something, we multiply the value accumulated so far by the retention coefficient 0 < γ < 1 and add the next value multiplied by 1 - γ. The closer γ is to one, the larger the accumulation window and the stronger the smoothing: the history of x starts to matter more than each new x. If x = 0 from some point on, v_t decays exponentially, hence the name. Let's apply the exponential moving average to accumulate the gradient of the objective function of our network:


    v_t = \gamma v_{t-1} + \eta\nabla_\theta J(\theta)


    \theta = \theta - v_t


    where γ is usually taken to be of the order of 0.9. Note that 1 - γ has not disappeared, it is absorbed into η; sometimes you can also meet a version of the formula with the explicit factor. The smaller γ is, the more the algorithm behaves like plain SGD. To get the popular physical interpretation of the equations, imagine a ball rolling over a hilly surface. If at moment t there was a non-zero slope (∇_θ J(θ)) under the ball, and then it rolled onto a plateau, it will still keep rolling along that plateau. Moreover, the ball will keep moving in the same direction for a couple more updates even if the slope has reversed. Nevertheless, viscous friction acts on the ball, and every step it loses 1 - γ of its speed. Here is what the accumulated momentum looks like for different γ (here and below, epochs are plotted along the X axis, and the gradient value and accumulated values along Y):


    [plots: accumulated momentum for different γ]
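

    As a sketch (with the same hypothetical grad_fn as before), the momentum update for a parameter vector might look like this; note that the factor 1 - γ is folded into the learning rate, as in the formulas above.


    def momentum_step(theta, v, grad_fn, lr=0.01, gamma=0.9):
        # v: accumulated update, same shape as theta, initialized with zeros
        v = gamma * v + lr * grad_fn(theta)
        theta = theta - v
        return theta, v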


    Note that the accumulated value v_t can considerably exceed each individual η∇_θ J(θ). Simple momentum accumulation already gives a good result, but Nesterov goes further and applies an idea well known in computational mathematics: looking ahead along the update vector. Since we are going to shift by γ v_{t-1} anyway, let's compute the gradient of the loss function not at the point θ, but at θ - γ v_{t-1}. Hence:


    v_t = \gamma v_{t-1} + \eta\nabla_\theta J(\theta - \gamma v_{t-1})


    \theta = \theta - v_t
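

    A minimal sketch of the Nesterov step under the same assumptions; the only difference from plain momentum is the point at which the gradient is evaluated.


    def nesterov_step(theta, v, grad_fn, lr=0.01, gamma=0.9):
        # look ahead along the accumulated update before taking the gradient
        v = gamma * v + lr * grad_fn(theta - gamma * v)
        theta = theta - v
        return theta, v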


    This change lets us "roll" faster if the derivative grows in the direction we are heading, and slower if it shrinks. This is especially evident for γ = 0.975 on the graph with the sine.


    [plots: Nesterov accelerated gradient for different γ]


    Looking ahead can play a trick on us if γ and η are set too large: we look so far ahead that we miss regions with the opposite sign of the gradient:


    [plot: look-ahead overshooting a region where the gradient changes sign]


    However, sometimes this behaviour may be desirable. Once again I draw your attention to the idea, looking ahead, rather than to this particular implementation. Nesterov's method (6) is the most obvious variant, but not the only one. For example, we can use one more trick from computational mathematics: stabilizing the gradient by averaging it over several points along the line we are moving along. Something like this:


    v_t = \gamma v_{t-1} + \frac{\eta}{3}\big(\nabla_\theta J(\theta - \frac{\gamma v_{t-1}}{3}) + \nabla_\theta J(\theta - \frac{2\gamma v_{t-1}}{3}) + \nabla_\theta J(\theta - \gamma v_{t-1})\big)


    Or like this:


    v_t = \gamma v_{t-1} + \frac{\eta}{7}\big(\nabla_\theta J(\theta - \frac{\gamma v_{t-1}}{2}) + 2\nabla_\theta J(\theta - \frac{3\gamma v_{t-1}}{4}) + 4\nabla_\theta J(\theta - \gamma v_{t-1})\big)


    Such a technique can help with noisy objective functions.
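

    For illustration, the first of the two averaged variants above could be sketched like this; it is purely an experiment rather than a standard optimizer, and grad_fn is hypothetical as before.


    def smoothed_nesterov_step(theta, v, grad_fn, lr=0.01, gamma=0.9):
        # average the gradient over three points along the update direction
        g = (grad_fn(theta - gamma * v / 3)
             + grad_fn(theta - 2 * gamma * v / 3)
             + grad_fn(theta - gamma * v)) / 3
        v = gamma * v + lr * g
        theta = theta - v
        return theta, v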


    In the methods that follow we will not manipulate the argument of the objective function (although, of course, nobody stops you from experimenting). From here on, for brevity:


    g_t \equiv \nabla_\theta J(\theta_t)


    Adagrad


    Most people can picture how momentum-accumulation methods work. Let's move on to more interesting optimization algorithms and start with the relatively simple Adagrad, the adaptive gradient.


    Some features may be extremely informative, but rare. An exotic highly paid profession, a fancy word in a spam database: they will easily drown in the noise of all the other updates. And this is not only about rarely occurring input parameters. Say, you may well encounter rare graphic patterns that turn into a feature only after passing through several layers of a convolutional network. It would be nice to be able to update the parameters with an eye to how typical the corresponding feature is. This is not hard to achieve: let's keep, for each network parameter, the sum of the squares of its updates. It will act as a proxy for typicality: if a parameter belongs to a chain of frequently activated neurons, it is constantly pulled back and forth, so the sum accumulates quickly. We rewrite the update formula like this:


    G_t = G_{t-1} + g_t^2


    \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t + \epsilon}} g_t


    where G_t is the sum of the squares of the updates and ε is a smoothing parameter needed to avoid division by 0. A parameter that was frequently updated in the past has a large G_t, which means a large denominator in (12). A parameter that has changed only once or twice will be updated at full strength. ε is taken of the order of 10⁻⁶ or 10⁻⁸ for a very aggressive update but, as the graphs show, it plays a role only at the beginning; closer to the middle of training G_t starts to outweigh it:


    [plots: Adagrad updates for different ε]
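

    A sketch of one Adagrad step under the same assumptions; the learning rate here is only an illustrative value.


    import numpy as np

    def adagrad_step(theta, G, grad_fn, lr=0.1, eps=1e-8):
        g = grad_fn(theta)
        G = G + g * g                              # per-parameter sum of squared gradients
        theta = theta - lr / np.sqrt(G + eps) * g
        return theta, G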


    So the idea of Adagrad is to use something that reduces the updates for the elements that we already update often enough. Nobody forces us to use this exact formula, which is why Adagrad is sometimes called a family of algorithms. We can, say, remove the root, or accumulate not the squares of the updates but their absolute values, or even replace the multiplier with something like e^{-G_t}.


    (Another matter is that this requires experimentation. If you remove the root, the updates start shrinking too quickly and the algorithm degrades.)


    Another advantage of Adagrad is that there is no need to choose the learning rate precisely. It is enough to set it moderately large, enough to give a good head start, but not so huge that the algorithm diverges. In effect we get learning rate decay automatically.


    RMSProp and Adadelta


    The disadvantage of Adagrad is that G_t in (12) can grow without bound, which after a while leads to updates that are too small and to paralysis of the algorithm. RMSProp and Adadelta are designed to fix this shortcoming.


    We modify the idea of Adagrad: we are still going to update less the weights that are updated too often, but instead of the full sum of updates we will use the squared gradient averaged over history. We again use the exponentially decaying running average (4). Let E[g²]_t be that running average at moment t:


    E[g^2]_t = \gamma E[g^2]_{t-1} + (1-\gamma) g_t^2


    then instead of (12) we get


    \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{E[g^2]_t + \epsilon}} g_t


    The denominator is the root of the mean of the squared gradients, hence RMSProp: root mean square propagation.


    RMS[g]_t = \sqrt{E[g^2]_t + \epsilon}
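

    A sketch of one RMSProp step; it differs from the Adagrad sketch above only in that the squared gradients are averaged with an exponential window instead of being summed.


    import numpy as np

    def rmsprop_step(theta, Eg2, grad_fn, lr=0.01, gamma=0.9, eps=1e-8):
        g = grad_fn(theta)
        Eg2 = gamma * Eg2 + (1 - gamma) * g * g    # running average of the squared gradient
        theta = theta - lr / np.sqrt(Eg2 + eps) * g
        return theta, Eg2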


    [plots: RMSProp updates for different γ]


    Notice how, on the graph with the long teeth, the update magnitude recovers for the different γ. Also compare the graphs with the square wave for Adagrad and RMSProp: in the first case the updates decay towards zero, while in the second they level off at a certain value.


    That is all there is to RMSProp. Adadelta differs from it in that we add to the numerator of (14) a stabilizing term proportional to RMS of Δθ_t. At step t we do not yet know the value of RMS[Δθ]_t, so the parameters are updated in three stages rather than two: first we accumulate the squared gradient, then update θ, and only then update RMS[Δθ].


    \Delta\theta_t = -\frac{RMS[\Delta\theta]_{t-1}}{RMS[g]_t} g_t


    \theta_{t+1} = \theta_t - \frac{RMS[\Delta\theta]_{t-1}}{RMS[g]_t} g_t


    E[\Delta\theta^2]_t = \gamma E[\Delta\theta^2]_{t-1} + (1-\gamma)\Delta\theta_t^2


    RMS[\Delta\theta]_t = \sqrt{E[\Delta\theta^2]_t + \epsilon}


    This change is made for reasons of dimensionality: θ and Δθ should have the same units. Note that the learning rate is dimensionless, which means that in all the algorithms before this one we were adding to θ a quantity of a different dimension. Physicists are horrified at this point, and we shrug: it works.


    Note that we need a non-zero RMS[Δθ]_{-1} for the very first step, otherwise all subsequent Δθ, and hence RMS[Δθ]_t, would be zero. But we solved that problem even earlier by adding ε inside RMS. It is another matter that without an explicitly large RMS[Δθ]_{-1} we get behaviour opposite to Adagrad and RMSProp: we will update more strongly (up to a certain limit) the weights that are used more often. Indeed, now, for Δθ to become significant, the parameter has to accumulate a large sum in the numerator of the fraction.
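

    A sketch of the three-stage Adadelta update described above: accumulate E[g²], compute Δθ using RMS[Δθ] from the previous step, apply it, and only then update E[Δθ²]. Initializing Edt2 with zeros or with a large value gives the two kinds of behaviour discussed here.


    import numpy as np

    def adadelta_step(theta, Eg2, Edt2, grad_fn, gamma=0.9, eps=1e-6):
        g = grad_fn(theta)
        Eg2 = gamma * Eg2 + (1 - gamma) * g * g
        # RMS of the previous parameter updates sits in the numerator
        delta = -np.sqrt(Edt2 + eps) / np.sqrt(Eg2 + eps) * g
        theta = theta + delta
        Edt2 = gamma * Edt2 + (1 - gamma) * delta * delta
        return theta, Eg2, Edt2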


    Here are the graphs for a zero starting RMS[Δθ]:


    [plots: Adadelta with a zero starting RMS[Δθ]]


    And for a large one:


    [plots: Adadelta with a large starting RMS[Δθ]]


    However, it seems this is exactly the effect the authors of the algorithm were after. For RMSProp and Adadelta, as for Adagrad, there is no need to select the learning rate very precisely; an approximate value is enough. It is usually advised to start tuning η somewhere in 0.1 - 1 and to leave γ around 0.9. The closer γ is to 1, the longer RMSProp and Adadelta with a large RMS[Δθ]_{-1} will strongly update under-used weights. If γ ≈ 1 and RMS[Δθ]_{-1} = 0, then Adadelta will for a long time treat rarely used weights "with distrust". The latter can lead to paralysis of the algorithm, or it can deliberately cause "greedy" behaviour, when the algorithm first updates the neurons that encode the best features.


    Adam


    Adam, adaptive moment estimation, is yet another optimization algorithm. It combines the idea of momentum accumulation with the idea of updating the weights of typical features more weakly. Recall (4) once more:


    m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t


    Adam differs from Nesterov in that we accumulate not Δθ but the values of the gradient, although this is a purely cosmetic change, see (23). In addition, we want to know how often the gradient changes. The authors of the algorithm proposed to estimate, for this purpose, the averaged uncentered variance:


    v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2


    It is easy to see that this is the already familiar E[g²]_t, so in effect there is no difference from RMSProp here.


    An important difference lies in the initial calibration of m_t and v_t: they suffer from the same problem as E[g²]_t in RMSProp: if they are given zero initial values, they take a long time to accumulate, especially with a large accumulation window (0 ≪ β₁ < 1, 0 ≪ β₂ < 1), while any special initial values would be two more hyperparameters. Nobody wants two more hyperparameters, so we artificially inflate m_t and v_t during the first steps (roughly 0 < t < 10 for m_t and 0 < t < 1000 for v_t):


    \hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}


    As a result, the update rule:


    \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t + \epsilon}}\hat{m}_t


    [plots: Adam updates for different β₁]


    Here it is worth looking carefully at how quickly the update values synchronize on the first teeth of the graphs with the rectangles, and at how smooth the update curve on the graph with the sine is: we got this "for free". With the recommended β₁, the graph with the spikes shows that sharp gradient bursts do not cause an instant response in the accumulated value, so well-tuned Adam does not need gradient clipping.
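

    Putting the pieces together, here is a sketch of a single Adam step under the same assumptions (t counts from 1, m and v start at zeros, grad_fn is hypothetical as before).


    import numpy as np

    def adam_step(theta, m, v, t, grad_fn, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g          # momentum-like first moment
        v = beta2 * v + (1 - beta2) * g * g      # uncentered second moment
        m_hat = m / (1 - beta1 ** t)             # bias correction for the cold start
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr / np.sqrt(v_hat + eps) * m_hat
        return theta, m, v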


    The authors of the algorithm derive (22) by expanding the recursive formulas (20) and (21). For example, for v_t:


    E[v_t] = E\Big[(1 - \beta_2)\sum_{i=1}^{t}\beta_2^{t-i} g_i^2\Big]


    = E[g_t^2](1 - \beta_2)\sum_{i=1}^{t}\beta_2^{t-i} + \zeta


    = E[g_t^2](1 - \beta_2)\frac{1 - \beta_2^t}{1 - \beta_2} + \zeta


    = E[g_t^2](1 - \beta_2^t) + \zeta


    The term ζ is close to 0 for a stationary distribution p(g), which is not true in the cases of practical interest to us, but we still carry the (1 - β₂^t) bracket over to the left-hand side. Informally, we can imagine that at t = 0 we have an infinite history of identical updates behind us:


    \hat{v}_1 = v_1 + \beta_2 v_1 + \beta_2^2 v_1 + \ldots = \frac{v_1}{1 - \beta_2}


    As we get closer to the correct value of v, we make the "virtual" part of the series decay faster:


    \hat{v}_2 = v_2 + \beta_2^2 v_2 + \beta_2^4 v_2 + \ldots = \frac{v_2}{1 - \beta_2^2}


    The authors of Adam propose the default values β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸ and claim that, thanks to the initial calibration, the algorithm performs better than, or about as well as, all the previous algorithms on a wide range of datasets. Note that, once again, equations (22) are not carved in stone. We have some theoretical justification of why the decay should look exactly like this, but nobody forbids experimenting with the calibration formulas. In my opinion, it simply begs for the look-ahead trick to be applied here, as in Nesterov's method.


    Adamax


    Adamax is just such an experiment, proposed in the same paper. Instead of the variance in (21), we can consider the moment of the gradient distribution of an arbitrary degree p. This can be numerically unstable, but the case of p tending to infinity works surprisingly well.


    v_t = \beta_2^p v_{t-1} + (1 - \beta_2^p) |g_t|^p


    Note that β₂^p of the suitable dimension is used instead of β₂. Also note that, to use in the Adam formulas the value obtained in (27), we have to take its root: u_t = v_t^{1/p}. Let's derive the decision rule to use instead of (21) by letting p → ∞ and expanding v_t under the root with the help of (27):


    u_t = \lim_{p \rightarrow \infty} v_t^{\frac{1}{p}}


    = \lim_{p \rightarrow \infty} \Big[\beta_2^p v_{t-1} + (1 - \beta_2^p) |g_t|^p\Big]^{\frac{1}{p}}


    = \lim_{p \rightarrow \infty} \Big[(1 - \beta_2^p)\sum_{i=1}^{t}\beta_2^{p(t-i)} |g_i|^p\Big]^{\frac{1}{p}}


    = \lim_{p \rightarrow \infty} (1 - \beta_2^p)^{\frac{1}{p}} \Big(\sum_{i=1}^{t}\beta_2^{p(t-i)} |g_i|^p\Big)^{\frac{1}{p}}


    = \lim_{p \rightarrow \infty} \Big(\sum_{i=1}^{t}\beta_2^{p(t-i)} |g_i|^p\Big)^{\frac{1}{p}}


    = \max(\beta_2^{t-1}|g_1|, \beta_2^{t-2}|g_2|, \ldots, \beta_2|g_{t-1}|, |g_t|)


    This happens because, as p → ∞, the largest term dominates the sum in (28). Informally, you can get an intuition for why this is so by taking a simple sum and a large p: (10¹⁰⁰ + 9¹⁰⁰)^(1/100) = 10 · (1 + (9/10)¹⁰⁰)^(1/100) ≈ 10 · (1.00003)^(1/100) ≈ 10. Not scary at all.


    The remaining steps of the algorithm are the same as in Adam.
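

    Since the script at the end of the article does not include Adamax, here is a sketch of it under the same assumptions; compared with Adam, the second moment is replaced by the exponentially weighted infinity norm u, which needs no bias correction.


    import numpy as np

    def adamax_step(theta, m, u, t, grad_fn, lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        m_hat = m / (1 - beta1 ** t)             # same bias correction as in Adam
        u = np.maximum(beta2 * u, np.abs(g))     # exponentially weighted infinity norm
        theta = theta - lr / (u + eps) * m_hat
        return theta, m, u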


    Experiments


    Now let's look at the different algorithms in action. To make things clearer, we will watch the trajectories of the algorithms in the problem of finding the minimum of a function of two variables. Let me remind you that training a neural network is essentially the same, only there are far more than two variables, and instead of an explicitly given function we have only a set of points on which we want to fit it. In our case the loss function is the objective surface along which the optimizers move. Of course, in such a simple problem it is impossible to feel the full power of the advanced algorithms, but it is good for intuition.


    First, let's look at Nesterov's accelerated gradient with different values of γ. Once you understand why its trajectories look the way they do, it is easier to understand the behaviour of all the other momentum-based algorithms, including Adam and Adamax.


    [animations: Nesterov accelerated gradient trajectories for different γ]


    All the trajectories end up in the same basin, but they do so in different ways. With a small γ the algorithm becomes similar to plain SGD: at every step the descent follows the decreasing gradient. With too large a γ, the history of changes starts to have a strong influence, and the trajectory can "wander" a lot. Sometimes this is good: the larger the accumulated momentum, the easier it is to break out of the basins of local minima along the way.


    [animations: optimizer trajectories]


    Sometimes it is bad: you can easily fly through the hollow of the global minimum, losing momentum on the way, and settle in a local one. That is why, for large γ, one can sometimes observe how the loss on the training set first reaches a global minimum, then rises sharply, then starts dropping again, but never returns to the earlier minimum.


    [animations: optimizer trajectories]


    Now let's look at different algorithms launched from the same point.


    [animations: different algorithms started from the same point]


    [animations]


    As you can see, they all converge quite well (given minimal tuning of the learning rate). Notice what big steps Adam and RMSProp take at the beginning of training. This happens because at the very start there have been no changes in any parameter (in any coordinate) and the sums in the denominators of (14) and (23) are still almost zero. Here the situation is more complicated:


    [animations: a landscape with a local minimum trap]


    Everyone except Adam got locked in the local minimum. Compare the behaviour of Nesterov's method and, say, RMSProp on these plots. Nesterov's accelerated gradient, for any γ, having fallen into the local minimum, circles around for a while, then loses momentum and dies out at some point. RMSProp draws its characteristic "hedgehogs". This is also related to the sum in the denominator of (14): in the trap the squared gradients are small and the denominator becomes small again. The size of the jumps is also affected, obviously, by the learning rate (the larger η, the larger the jumps) and by ε (the smaller it is, the larger the jumps). Adagrad does not show this behaviour, since it keeps the sum over the entire history of gradients rather than over a window. Usually this is desirable behaviour: it lets the optimizer jump out of traps, but occasionally the algorithm escapes the global minimum this way, which again leads to an irreparable worsening of performance on the training set.


    Finally, note that even though all these optimizers can find their way to a minimum even along a plateau with a very small slope, or escape a local minimum if they gained momentum beforehand, a bad starting point leaves them no chance:


    [animations: a bad starting point]


    [animations]


    Conclusion


    So, we have examined some of the most popular first-order optimizers for neural networks. I hope these algorithms no longer look like a magic black box with a bunch of mysterious parameters, and that you can now make an informed decision about which optimizer to use for your problems.


    Finally, let me clarify one important point: it is unlikely that switching the weight-update algorithm will solve all your problems with a neural network in one stroke. Of course, the improvement when moving from SGD to something else will be obvious, but most likely, for relatively simple datasets and network architectures, the training history of the algorithms described in this article will look something like this:


    [plot: learning curves of the different algorithms]


    ... not too impressive. I would suggest keeping Adam as the "golden hammer", since it gives the best results with minimal tuning of its parameters. When the network is already more or less debugged, try Nesterov's method with different parameters. Sometimes it lets you achieve better results, but it is relatively sensitive to changes in the network: add or remove a couple of layers and you have to look for a new optimal learning rate. Treat the rest of the algorithms and their parameters as a few more knobs and switches that can be pulled in special cases.


    If you want custom plots with gradients, use this Python script (requires Python > 3.4, numpy, and matplotlib):


    Code
    from matplotlib import pyplot as plt
    import numpy as np
    from math import ceil, floor
    def linear_interpolation(X, idx):
        idx_min = floor(idx)
        idx_max = ceil(idx)
        if idx_min == idx_max or idx_max >= len(X):
            return X[idx_min]
        elif idx_min < 0:
            return X[idx_max]
        else:
            return X[idx_min] + (idx - idx_min)*(X[idx_max] - X[idx_min])  # linear interpolation between neighbours
    def EDM(X, gamma, lr=0.25):
        Y = []
        v = 0
        for x in X:
            v = gamma*v + lr*x
            Y.append(v)
        return np.asarray(Y)
    def NM(X, gamma, lr=0.25):
        Y = []
        v = 0
        for i in range(len(X)):
            v = gamma*v + lr*(linear_interpolation(X, i+gamma*v) if i+gamma*v < len(X) else 0)
            Y.append(v)
        return np.asarray(Y)
    def SmoothedNM(X, gamma, lr=0.25):
        Y = []
        v = 0
        for i in range(len(X)):
            lookahead4 = linear_interpolation(X, i+gamma*v/4)   if i+gamma*v/4      < len(X) else 0
            lookahead3 = linear_interpolation(X, i+gamma*v/2)   if i+gamma*v/2      < len(X) else 0
            lookahead2 = linear_interpolation(X, i+gamma*v*3/4) if i+gamma*v*3/4    < len(X) else 0
            lookahead1 = linear_interpolation(X, i+gamma*v)     if i+gamma*v        < len(X) else 0
            v = gamma*v + lr*(lookahead4 + lookahead3 + lookahead2 + lookahead1)/4
            Y.append(v)
        return np.asarray(Y)
    def Adagrad(X, eps, lr=2.5):
        Y = []
        G = 0
        for x in X:
            G += x*x
            v = lr/np.sqrt(G + eps)*x
            Y.append(v)
        return np.asarray(Y)
    def RMSProp(X, gamma, lr=0.25, eps=0.00001):
        Y = []
        EG = 0
        for x in X:
            EG = gamma*EG + (1-gamma)*x*x
            v = lr/np.sqrt(EG + eps)*x
            Y.append(v)
        return np.asarray(Y)
    def Adadelta(X, gamma, lr=50.0, eps=0.001):
        Y = []
        EG = 0
        EDTheta = lr
        for x in X:
            EG = gamma*EG + (1-gamma)*x*x
            v = np.sqrt(EDTheta + eps)/np.sqrt(EG + eps)*x
            Y.append(v)
            EDTheta = gamma*EDTheta + (1-gamma)*v*v
        return np.asarray(Y)
    def AdadeltaZeroStart(X, gamma, eps=0.001):
        return Adadelta(X, gamma, lr=0.0, eps=eps)
    def AdadeltaBigStart(X, gamma, eps=0.001):
        return Adadelta(X, gamma, lr=50.0, eps=eps)
    def Adam(X, beta1, beta2=0.999, lr=0.25, eps=0.0000001):
        Y = []
        m = 0
        v = 0
        for i, x in enumerate(X):
            m = beta1*m + (1-beta1)*x
            v = beta2*v + (1-beta2)*x*x
            m_hat = m/(1- pow(beta1, i+1) )
            v_hat = v/(1- pow(beta2, i+1) )
            dthetha = lr/np.sqrt(v_hat + eps)*m_hat
            Y.append(dthetha)
        return np.asarray(Y)
    np.random.seed(413)
    X = np.arange(0, 300)
    D_Thetha_spikes = np.asarray( [int(x%60 == 0) for x in X])
    D_Thetha_rectangles = np.asarray( [2*int(x%40 < 20) - 1 for x in X])
    D_Thetha_noisy_sin = np.asarray( [np.sin(x/20) + np.random.random() - 0.5 for x in X])
    D_Thetha_very_noisy_sin = np.asarray( [np.sin(x/20)/5 + np.random.random() - 0.5 for x in X])
    D_Thetha_uneven_sawtooth = np.asarray( [ x%20/(15*int(x > 80) + 5) for x in X])
    D_Thetha_saturation = np.asarray( [ int(x % 80 < 40) for x in X])
    for method_label, method, parameter_step in [
                    ("GRAD_Simple_Momentum", EDM, [0.25, 0.9, 0.975]),
                    ("GRAD_Nesterov", NM, [0.25, 0.9, 0.975]),
                    ("GRAD_Smoothed_Nesterov", SmoothedNM, [0.25, 0.9, 0.975]),
                    ("GRAD_Adagrad", Adagrad, [0.0000001, 0.1, 10.0]),
                    ("GRAD_RMSProp", RMSProp, [0.25, 0.9, 0.975]),
                    ("GRAD_AdadeltaZeroStart", AdadeltaZeroStart, [0.25, 0.9, 0.975]),
                    ("GRAD_AdadeltaBigStart", AdadeltaBigStart, [0.25, 0.9, 0.975]),
                    ("GRAD_Adam", Adam, [0.25, 0.9, 0.975]),
                ]:
        for label, D_Thetha in [("spikes", D_Thetha_spikes),
                                ("rectangles", D_Thetha_rectangles),
                                ("noisy sin", D_Thetha_noisy_sin),
                                ("very noisy sin", D_Thetha_very_noisy_sin),
                                ("uneven sawtooth", D_Thetha_uneven_sawtooth),
                                ("saturation", D_Thetha_saturation), ]:
            fig = plt.figure(figsize=[16.0, 9.0])
            ax = fig.add_subplot(111)
            ax.plot(X, D_Thetha, label="gradient")
            for gamma in parameter_step:
                Y = method(D_Thetha, gamma)
                ax.plot(X, Y, label="param="+str(gamma))
            ax.spines['bottom'].set_position('zero')
            full_name = method_label + "_" + label
            plt.xticks(np.arange(0, 300, 20))
            plt.grid(True)
            plt.title(full_name)
            plt.xlabel('epoch')
            plt.ylabel('value')
            plt.legend()
            # plt.show(block=True) #Uncomment this and comment the next line if you just want to watch
            plt.savefig(full_name)
            plt.close(fig)

    If you want to experiment with the algorithms' parameters and your own objective functions, use this script to create your own animations of minimizer trajectories (theano / lasagne is also required):


    Another code
    import numpy as np
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation
    import theano
    import theano.tensor as T
    from lasagne.updates import nesterov_momentum, rmsprop, adadelta, adagrad, adam
    #For reproducibility. Comment it out for randomness
    np.random.seed(413)
    #Uncomment this and comment the next line if you want to try random init
    # clean_random_weights = scipy.random.standard_normal((2, 1))
    clean_random_weights = np.asarray([[-2.8], [-2.5]])
    W = theano.shared(clean_random_weights)
    Wprobe = T.matrix('weights')
    levels = [x/4.0 for x in range(-8, 2*12, 1)] + [6.25, 6.5, 6.75, 7] + \
             list(range(8, 20, 1))
    levels = np.asarray(levels)
    O_simple_quad = (W**2).sum()
    O_wobbly = (W**2).sum()/3 + T.abs_(W[0][0])*T.sqrt(T.abs_(W[0][0]) + 0.1) + 3*T.sin(W.sum()) + 3.0 + 8*T.exp(-2*((W[0][0] + 1)**2+(W[1][0] + 2)**2))
    O_basins_and_walls = (W**2).sum()/2 + T.sin(W[0][0]*4)**2
    O_ripple = (W**2).sum()/3 + (T.sin(W[0][0]*20)**2 + T.sin(W[1][0]*20)**2)/15
    O_giant_plateu = 4*(1-T.exp(-((W[0][0])**2+(W[1][0])**2)))
    O_hills_and_canyon = (W**2).sum()/3 + \
                         3*T.exp(-((W[0][0] + 1)**2+(W[1][0] + 2)**2)) + \
                           T.exp(-1.5*(2*(W[0][0] + 2)**2+(W[1][0] -0.5)**2)) + \
                         3*T.exp(-1.5*((W[0][0] -1)**2+2*(W[1][0] + 1.5)**2)) + \
                         1.5*T.exp(-((W[0][0] + 1.5)**2+3*(W[1][0] + 0.5)**2)) + \
                         4*(1 - T.exp(-((W[0][0] + W[1][0])**2)))
    O_two_minimums = 4-0.5*T.exp(-((W[0][0] + 2.5)**2+(W[1][0] + 2.5)**2))-3*T.exp(-((W[0][0])**2+(W[1][0])**2))
    nesterov_testsuit = [
                    (nesterov_momentum, "nesterov momentum 0.25",    {"learning_rate": 0.01, "momentum": 0.25}),
                    (nesterov_momentum, "nesterov momentum 0.9",     {"learning_rate": 0.01, "momentum": 0.9}),
                    (nesterov_momentum, "nesterov momentum 0.975",   {"learning_rate": 0.01, "momentum": 0.975})
                ]
    cross_method_testsuit = [
                    (nesterov_momentum, "nesterov",     {"learning_rate": 0.01}),
                    (rmsprop,           "rmsprop",      {"learning_rate": 0.25}),
                    (adadelta,          "adadelta",     {"learning_rate": 100.0}),
                    (adagrad,           "adagrad",      {"learning_rate": 1.0}),
                    (adam,              "adam",         {"learning_rate": 0.25})
                ]
    for O, plot_label in [
               (O_wobbly, "Wobbly"),
               (O_basins_and_walls, "Basins_and_walls"),
               (O_giant_plateu, "Giant_plateu"),
               (O_hills_and_canyon, "Hills_and_canyon"),
               (O_two_minimums, "Bad_init")
            ]:
        result_probe = theano.function([Wprobe], O, givens=[(W, Wprobe)])
        history = {}
        for method, history_mark, kwargs_to_method in cross_method_testsuit:
            W.set_value(clean_random_weights)
            history[history_mark] = [W.eval().flatten()]
            updates = method(O, [W], **kwargs_to_method)
            train_fnc = theano.function(inputs=[], outputs=O, updates=updates)
            for i in range(125):
                result_val = train_fnc()
                print("Iteration " + str(i) + " result: "+ str(result_val))
                history[history_mark].append(W.eval().flatten())
            print("-------- DONE {}-------".format(history_mark))
        delta = 0.05
        mesh = np.arange(-3.0, 3.0, delta)
        X, Y = np.meshgrid(mesh, mesh)
        Z = []
        for y in mesh:
            z = []
            for x in mesh:
                z.append(result_probe([[x], [y]]))
            Z.append(z)
        Z = np.asarray(Z)
        print("-------- BUILT MESH -------")
        fig, ax = plt.subplots(figsize=[12.0, 8.0])
        CS = ax.contour(X, Y, Z, levels=levels)
        plt.clabel(CS, inline=1, fontsize=10)
        plt.title(plot_label)
        nphistory = []
        for key in history:
            nphistory.append(
                    [np.asarray([h[0] for h in history[key]]),
                     np.asarray([h[1] for h in history[key]]),
                     key]
                )
        lines = []
        for nph in nphistory:
            lines += ax.plot(nph[0], nph[1], label=nph[2])
        leg = plt.legend()
        plt.savefig(plot_label + '_final.png')
        def animate(i):
            for line, hist in zip(lines, nphistory):
                line.set_xdata(hist[0][:i])
                line.set_ydata(hist[1][:i])
            return lines
        def init():
            for line, hist in zip(lines, nphistory):
                line.set_ydata(np.ma.array(hist[0], mask=True))
            return lines
        ani = animation.FuncAnimation(fig, animate, np.arange(1, 120), init_func=init,
                                      interval=100, repeat_delay=0, blit=True, repeat=True)
        print("-------- WRITING ANIMATION -------")
        # plt.show(block=True) #Uncomment this and comment the next line if you just want to watch
        ani.save(plot_label + '.mp4', writer='ffmpeg_file', fps=5)
        print("-------- DONE {} -------".format(plot_label))

    PDF version of the article


    Thanks to Roman Parpalak for the tool that makes working with formulas on Habr not so painful. You can look at the article I took as the basis for mine: it contains a lot of useful information that I omitted here in order to focus better on the optimization algorithms. To dig deeper into neural networks, I recommend this book.


    Have a nice New Year's weekend, Habr!