The battle of the titans of our time: the dispute between V. Vapnik and L. Jackel over the future of SVMs and neural networks

Published on June 18, 2018


    Memories of how Niels Bohr argued with Albert Einstein, and George Westinghouse and Nikola Tesla with Thomas Edison, have long since become legends. These scientific disputes have not been forgotten because, on the one hand, only time could resolve them, and on the other, their outcomes determined the development of technology for decades to come. Do such discussions exist today? They do. And they are just as heated and interesting as a hundred years ago.

    Perhaps the most interesting dispute of our time is the one between Vladimir Vapnik, the inventor of the support vector machine (SVM), and Larry Jackel, his manager at Bell Labs and a proponent of convolutional neural networks.

    Photo from Yann LeCun's blog on Google+

    Their bets were documented and published on the Google+ page of Yann LeCun (the ideologist of convolutional neural networks), who acted as the judge of the dispute and certified all the terms with his own signature. The terms were as follows.

    Image of the document from Yann LeCun's blog on Google+

    Larry Jackel argued that by the year 2000 a theoretical understanding would emerge of why large neural networks work well, in terms of limits and conditions of applicability similar to those already described for SVMs.

    Vladimir Vapnik claimed that by 2005 no one in their right mind would be using neural networks of the architectures proposed back in 1995; everyone would have switched to the support vector machine.

    At stake was a lavish dinner, to be paid for by the one whose forecast failed to come true.

    And so: by the year 2000 there was still no coherent theory of how neural networks operate; they remained black boxes. Yet people in their right mind continue to use networks of the same architectures proposed almost a quarter of a century ago, despite Vapnik's prediction.

    Both bettors lost. And both paid for a restaurant dinner for three (Yann LeCun joined them).

    A beautiful story about a modern scientific discussion: is this not a new legend that will be retold decades and even centuries from now? But the point here is not beauty. It is that two camps of developers and researchers, comparable in size and significance, argue over which is more effective to use: mysterious neural networks, or the support vector machine, well studied and described in detail in the literature.

    At the end of the last century, few people believed in neural networks, even though CNNs (convolutional neural networks) were developed at about the same time as SVMs, between 1988 and 1992. Modern practice and the scientific literature show that today, in the era of the deep learning revolution, neural networks are used ever more actively and help solve complex problems, for example in speech recognition. The SVM is losing popularity but remains in demand for certain tasks.
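    To make the subject of the bet concrete: the support vector machine Vapnik defended seeks a maximum-margin separating hyperplane. Below is a minimal sketch of a linear SVM trained by stochastic subgradient descent on the regularized hinge loss; the toy data, learning rate, and regularization constant are illustrative assumptions, not details from the dispute.

```python
import random

# Hypothetical toy data: 2-D points that are linearly separable,
# class +1 above the line x2 = x1, class -1 below, with a gap between them.
random.seed(0)
data = []
while len(data) < 100:
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    if abs(x2 - x1) < 0.2:      # keep a margin between the classes
        continue
    data.append(((x1, x2), 1 if x2 > x1 else -1))

# Linear SVM via stochastic subgradient descent on the regularized
# hinge loss: lam/2 * |w|^2 + max(0, 1 - y * (w.x + b)).
w, b = [0.0, 0.0], 0.0
eta, lam = 0.05, 0.001          # step size and regularization strength
for _ in range(20000):
    (x1, x2), y = random.choice(data)
    # Regularization always shrinks w a little ...
    w[0] -= eta * lam * w[0]
    w[1] -= eta * lam * w[1]
    # ... and points inside the margin (or misclassified) push it.
    if y * (w[0] * x1 + w[1] * x2 + b) < 1:
        w[0] += eta * y * x1
        w[1] += eta * y * x2
        b += eta * y

accuracy = sum(1 for (x1, x2), y in data
               if y * (w[0] * x1 + w[1] * x2 + b) > 0) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

    A kernelized SVM replaces the dot product with a kernel function, and the whole construction comes with margin-based generalization bounds. It was precisely this kind of theoretical characterization that, per the bet, neural networks lacked.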

    The "war of the currents" (the dispute between T. Edison and G. Westinghouse) ended only in 2007, when the last direct-current customer in New York disappeared. One would like to believe that the discussion between V. Vapnik and L. Jackel will be settled sooner, because its outcome seems clear. Today everyone is talking about the deep learning revolution and about how, in practice, neural networks have beaten everything that came before them. But one important point raised in the dispute remains: there is still no clear understanding of the boundaries and conditions of applicability of neural networks, and no complete analytical description of what happens inside these "black boxes". We are watching analysis recede into the background, giving way to engineering and empiricism.

    Will we ever learn how the black box called a "neural network" works? Time will tell. In the meantime, let us keep arguing, gentlemen!