
Google's Neural Network Gets Started
In June 2012, a group of Google researchers launched a neural network on a cluster of 1,000 computers (16,000 processor cores and one billion connections between neurons). The experiment became one of the largest in the field of artificial intelligence, and the system was designed from the start to solve practical problems.
A self-learning neural network is a fairly universal tool that can be applied to different kinds of data. Google used it to improve speech recognition accuracy. “We got a 20-25% reduction in the number of recognition errors,” said Vincent Vanhoucke, head of speech recognition at Google. “This means that many people will now get an error-free result.” The neural network optimized the algorithms for English, but Vanhoucke says similar improvements can be made for other languages and dialects.
The neural network is also used in the Google Street View project to process small fragments of photographs, determining whether a number in a fragment is a house number. Surprisingly, on this task the neural network achieves better recognition accuracy than humans.
In the future, the neural network will be used in other Google products, such as image search, Google Glass, and Google's self-driving cars. Jeff Dean, an engineer on the neural network project, says that in a car the system can take contextual information into account, including data from laser rangefinders or, say, the sound of the engine. According to Dean, a powerful neural network can use a great deal of contextual information during training; this is why the team built such a large cluster of 1,000 servers, whereas most researchers test neural networks on a single computer.
The first results of the experiment with the Google neural network were published in June 2012. Tests showed that the network is capable of self-learning: after viewing 10 million random frames from YouTube, neurons formed in the network that respond selectively to the presence of faces in images. According to the researchers, during self-learning the Google network behaved in much the same way as neurons in the visual cortex of the mammalian brain, with the caveat that, despite its scale, it is still far smaller in its number of nodes than the visual cortex.
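To make the idea concrete, here is a minimal sketch of unsupervised feature learning with a tiny autoencoder, the general technique behind this kind of self-learning. Everything here is an illustrative assumption: the random stand-in patches, the layer sizes, and the plain gradient-descent loop. Google's actual system was a vastly larger sparse deep autoencoder trained on real video frames.

```python
# A minimal sketch of unsupervised feature learning with a tiny
# autoencoder. The real Google system was a far larger sparse deep
# autoencoder trained on YouTube frames; the data, sizes, and
# training loop here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image patches": random 8x8 grayscale patches, flattened.
# In the real experiment these would be frames sampled from videos.
patches = rng.random((10_000, 64))
patches -= patches.mean(axis=0)          # center the data

n_hidden = 25                            # number of feature detectors
W1 = rng.normal(0, 0.1, (64, n_hidden))  # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))  # decoder weights
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    # Forward pass: encode patches into hidden activations, then
    # try to reconstruct the original input from them.
    h = sigmoid(patches @ W1)
    recon = h @ W2
    err = recon - patches                # reconstruction error

    # Backward pass: plain gradient descent on squared error.
    grad_W2 = h.T @ err / len(patches)
    grad_h = err @ W2.T * h * (1 - h)
    grad_W1 = patches.T @ grad_h / len(patches)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# After training, each column of W1 is a learned feature: the input
# pattern that most strongly activates the corresponding hidden unit.
# In the Google experiment, some units learned to respond to faces.
print("reconstruction MSE:", float((err ** 2).mean()))
```

No labels are used anywhere in the loop: the network learns its features purely by trying to reconstruct its input, which is what allows detectors to emerge on their own.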
The illustration below shows a composite image corresponding to the optimal stimulus for the cat-classifier neuron in the first experiment.

A composite image corresponding to the optimal stimulus that activates the face-classifier neuron.
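Composite images like these are typically obtained by numerical optimization: the trained network is held fixed, and gradient ascent is run on the input image until the chosen neuron's activation is maximal, subject to a bound on the image norm. Below is a toy sketch of that procedure; the single sigmoid neuron with random weights is an assumption standing in for the real, much deeper model.

```python
# A minimal sketch of computing an "optimal stimulus": gradient
# ascent on the input, holding the network fixed, to maximize a
# chosen neuron's activation. The single sigmoid unit and its
# random weights are toy assumptions; the published images came
# from a far deeper model.
import numpy as np

rng = np.random.default_rng(1)

w = rng.normal(0, 1, 64)       # fixed weights of the neuron under study
b = 0.0

def activation(x):
    # Toy neuron: sigmoid of a linear projection of the input image.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(0, 0.01, 64)    # start from a near-blank "image"
lr = 0.1
for step in range(200):
    a = activation(x)
    # d(activation)/dx for the sigmoid-of-linear neuron above.
    grad = a * (1 - a) * w
    x += lr * grad             # ascend the activation surface
    x /= max(np.linalg.norm(x), 1.0)  # keep the image norm bounded

# x now approximates the bounded input this neuron "wants to see";
# reshaped to 8x8 it is the analogue of the composite images above.
print("final activation:", float(activation(x)))
```

For this toy neuron the result converges to the normalized weight vector, which is exactly the input pattern the unit responds to most strongly; in a deep network the same procedure yields images like the cat and face stimuli shown here.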
