36 materials about neural networks: books, articles and recent research
What should you do if you want to learn more about neural networks, pattern recognition methods, computer vision and deep learning? One obvious option is to find a course and start actively studying the theory and solving practical problems. However, that means setting aside a significant share of your personal time. There is another way - to turn to a “passive” source of knowledge: pick out some literature and immerse yourself in the topic, spending only half an hour to an hour a day on it.
So, to make life easier for ourselves and our readers, we have put together a short selection of books, articles and papers on neural networks and deep learning that users of GitHub, Quora, Reddit and other platforms recommend. It includes material both for those who are just getting acquainted with neural network technologies and for colleagues who want to broaden their knowledge in the area, or who are simply after some “light reading” for the evening.
/ Flickr / giuseppe milo / cc
Current context
Any list, no matter how long, will have one main and defining feature - incompleteness. Life does not stand still: scientific thought and technology keep developing, new problems are being formulated, and the resulting solutions are published in conference proceedings, journals and collections. For those wondering what is happening right now and how the community lives, we recommend following the materials of the field's main events - ICML and NIPS.
And yet, where to start?
Neural Networks and Deep Learning
A free online book by the scientist and programmer Michael Nielsen. The author covers deep learning in neural networks and answers questions such as “Why are neural networks hard to train?” and “How does the backpropagation algorithm work?”.
Book author: Tariq Rashid
Make Your Own Neural Network
The book explains the mathematical principles underlying neural networks and walks you through writing your own neural network in Python that recognizes handwritten digits. The goal of the book is to give the reader a clear understanding of how neural networks work and to make the material as accessible as possible.
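To give a feel for what “your own neural network in Python” amounts to, here is a minimal sketch in the same spirit: one hidden layer, sigmoid activations and a plain gradient-descent update, written with NumPy. The layer sizes, learning rate and class name are illustrative assumptions on our part, not code from the book.

```python
import numpy as np

# A tiny one-hidden-layer network for 28x28 digit images flattened to 784 inputs.
# Sizes and learning rate are illustrative choices, not the book's configuration.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyNetwork:
    def __init__(self, n_in=784, n_hidden=100, n_out=10, lr=0.1):
        rng = np.random.default_rng(0)
        self.w1 = rng.normal(0.0, n_in ** -0.5, (n_hidden, n_in))
        self.w2 = rng.normal(0.0, n_hidden ** -0.5, (n_out, n_hidden))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(self.w1 @ x)       # hidden activations
        self.o = sigmoid(self.w2 @ self.h)  # output activations (one per digit)
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        err_o = (o - target) * o * (1 - o)                    # output error signal
        err_h = (self.w2.T @ err_o) * self.h * (1 - self.h)   # error pushed back to hidden layer
        self.w2 -= self.lr * np.outer(err_o, self.h)
        self.w1 -= self.lr * np.outer(err_h, x)

# Usage: net = TinyNetwork(); net.train_step(image_vector, one_hot_label)
```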
A Brief Introduction to Neural Networks
The author, a specialist in data analysis and machine learning, explains in simple language how neural networks operate. After reading it, you will be able to start working with neural networks yourself and to understand someone else's code. The book is constantly being improved: updated versions incorporate feedback from readers.
An Introduction to Statistical Learning
An introduction to statistical learning. The target audience is university students and graduates, including those from non-mathematical specialties. Everything is very accessible, with hands-on labs in R.
Programming Collective Intelligence
The book shows how to analyze user experience and human behavior based on the information we collect every day. The algorithms it presents come with code that can be used right away on a website or in an application, and each chapter includes practical exercises designed to reinforce and refine them.
Neural Networks: A Systematic Introduction
A general treatment of how artificial neural networks are constructed. Each chapter contains examples, illustrations and a bibliography. The book suits those who want to deepen their knowledge in this field, but it can also serve as a good foundation for a course on neurocomputing.
Deep Learning: Methods and Applications
A book from Microsoft Research covering the core deep learning methodologies. The authors describe how neural networks are used in signal and information processing, and examine the areas where deep learning is already actively applied as well as those where it could have a significant impact in the long run.
Deep Learning Tutorial
Published by the University of Montreal (Canada). It contains guides to the most important deep learning algorithms and shows how to implement them using the Theano library. As the authors note, the reader should already understand Python and NumPy and should work through the Theano tutorial first.
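For readers unfamiliar with Theano, its programming model is worth a quick illustration: you build a symbolic expression, optionally ask for its gradient, and compile everything into a regular Python callable. The toy expression below is our own sketch, not code from the tutorial.

```python
# A toy illustration of Theano's workflow: define symbolic variables, build an
# expression, derive a gradient symbolically, and compile it into a callable.
import theano
import theano.tensor as T

x = T.dscalar('x')
y = x ** 2 + 3 * x        # symbolic expression
dy_dx = T.grad(y, x)      # symbolic derivative: 2*x + 3
f = theano.function([x], [y, dy_dx])

print(f(2.0))             # [array(10.0), array(7.0)]
```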
Pattern Recognition and Machine Learning
This is the first pattern recognition textbook to present the Bayesian approach. The book covers approximate inference algorithms for situations in which exact answers cannot be obtained, and the material is supported by graphical models for describing probability distributions. The book suits everyone: reading it does not require a thorough prior knowledge of machine learning or pattern recognition.
Book author: Simon S Haykin
Neural Networks and Learning Machines
The book examines the concepts and principles of neural networks and learning machines. To date, the third edition has been released.
Hands-On Machine Learning
With illustrative examples, a minimum of theory, and two production-ready Python frameworks, the author shows how intelligent systems are built. You will learn about a range of techniques, from simple linear regression to deep learning, and each chapter provides exercises to consolidate what you have learned.
Hacker's Guide to Neural Networks
Andrej Karpathy, head of AI development at Tesla, suggests taking a look at where neural networks came from and getting started with real-valued circuits. The author also taught the CS231n course at Stanford, whose materials are closely related to this guide. The slides can be found here, and the notes here.
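The “real-valued circuits” idea is easy to show in a few lines: treat an expression as a circuit of simple gates, run it forward, then push gradients backward through each gate with the chain rule. The sketch below uses the circuit f(x, y, z) = (x + y) * z purely as an illustration; it is not code from the guide.

```python
# Forward and backward pass through a tiny "circuit": an add gate feeding a
# multiply gate. Gradients flow backward via the chain rule.

def forward_backward(x, y, z):
    # forward pass for f(x, y, z) = (x + y) * z
    q = x + y
    f = q * z
    # backward pass (chain rule)
    df_dz = q            # d(q*z)/dz
    df_dq = z            # d(q*z)/dq
    df_dx = 1.0 * df_dq  # d(x+y)/dx = 1, times the upstream gradient
    df_dy = 1.0 * df_dq
    return f, (df_dx, df_dy, df_dz)

print(forward_backward(-2.0, 5.0, -4.0))  # f = -12.0, gradients = (-4.0, -4.0, 3.0)
```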
Deep Learning, Natural Language Processing and Data Representation
How to use deep neural networks for natural language processing (NLP). The author also tries to answer the question of why neural networks work at all.
Deep Learning: A Guide
Java developer Ivan Vasiliev introduces the key concepts and algorithms behind deep learning, using the Java programming language. The accompanying deep learning library for Java is here.
Origin of Deep Learning
This publication is a historical overview of the development of deep learning models. The authors begin with how neural networks first appeared and move smoothly on to the technologies of the last decade: deep belief networks, convolutional and recurrent neural networks.
Deep Reinforcement Learning: An Overview
The material focuses on the latest achievements in deep reinforcement learning (RL). The authors first review the principles of deep learning and reinforcement learning, and then move on to questions of real-world applicability: games (AlphaGo), robotics, chatbots and so on.
/ Flickr / Brandur Øssursson / PD
Advanced reading
Neural Networks for Applied Sciences and Engineering
An overview of neural network architectures for hands-on data analysis. In separate chapters the authors discuss the use of self-organizing maps for clustering nonlinear data, as well as applications of recurrent networks in science.
Neural networks. Full course
The book examines the paradigms of artificial neural networks, with illustrations and examples of specific tasks, and analyzes the role of neural networks in pattern recognition, control and signal processing. It will be useful to engineers, computer scientists and physicists, as well as anyone interested in artificial neural networks.
Self-Organizing Maps
Self-organizing maps and their variants are among the most popular neural network architectures for unsupervised learning. The book gives a detailed account of the mathematical apparatus behind self-organizing maps and of their applications. It is suitable for specialists in neural modeling, as well as for undergraduate and graduate students.
Book author: Ian Goodfellow
Deep Learning (Adaptive Computation and Machine Learning series)
“Deep Learning is the only comprehensive book in this area” - these are the words of Elon Musk, co-founder of Tesla and SpaceX. The text provides the necessary mathematical background and discusses the key concepts of linear algebra, probability theory, information theory and machine learning.
Neural Networks for Pattern Recognition
The book presents techniques for modeling probability density functions, considers algorithms for minimizing the error function, and covers the Bayesian approach and its applications. In addition, the author has collected more than a hundred useful exercises under this cover.
A Fast Learning Algorithm for Deep Belief Nets
The authors propose an algorithm that can train deep belief networks (DBNs) one layer at a time. Also worth watching is the video tutorial on deep belief networks by one of the authors, Geoffrey Hinton (G. E. Hinton).
Learning Representations by Back-Propagating Errors
This paper is considered the foundation of the whole idea of training neural networks by backpropagation. Historical background and implementation. Recommended reading.
Learning to Generate Chairs, Tables and Cars with Convolutional Networks
The article shows that generative networks can find similarities between objects and deliver better performance than competing solutions. The approach presented in the article can also be used to generate faces.
Image Completion with Deep Learning in TensorFlow
This article describes how to use deep learning to complete images with a DCGAN. The post is aimed at a technical audience with a machine learning background. The author has published all the source code on GitHub.
Face Generator in Torch
The author implements a generative model that turns random “noise” into images of faces. This is done using a generative adversarial network (GAN).
A Practical Guide to Training Restricted Boltzmann Machines
An overview of restricted Boltzmann machines. The guide offers many practical recipes for debugging and improving training: initializing the weights, monitoring progress, choosing the number of hidden units.
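For context, the training step that those recipes tune looks roughly like the contrastive-divergence (CD-1) update sketched below for a binary RBM. The shapes, learning rate and the use of probabilities in the negative phase are our own illustrative assumptions, not prescriptions from the guide.

```python
import numpy as np

# One CD-1 update for a binary RBM with weight matrix W (visible x hidden)
# and bias vectors b_vis, b_hid. Sizes and learning rate are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.01):
    # positive phase: sample hidden units given the data
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase: one step of Gibbs sampling (reconstruction)
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # parameter updates from the difference of correlations
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_vis += lr * (v0 - p_v1)
    b_hid += lr * (p_h0 - p_h1)
    return W, b_vis, b_hid

# Usage (illustrative sizes):
# W = 0.01 * rng.standard_normal((784, 100)); b_vis = np.zeros(784); b_hid = np.zeros(100)
# W, b_vis, b_hid = cd1_step(binary_image_vector, W, b_vis, b_hid)
```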
Improving neural networks by preventing co-adaptation of feature detectors
When a large neural network is trained on a small training set, it usually performs poorly. The authors propose a method intended to solve this “overfitting” problem by making neurons learn features that genuinely help produce the correct answer.
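The mechanism the paper proposes, now commonly known as dropout, is simple to sketch: during each training pass, randomly silence a fraction of the hidden units so they cannot co-adapt. The “inverted dropout” scaling below is a common modern convention and an assumption on our part, not necessarily the paper's exact formulation.

```python
import numpy as np

# Inverted dropout on a vector of hidden activations: zero out a random subset
# during training and rescale so the expected activation stays the same.

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    if not training:
        return activations                               # no dropout at test time
    mask = (rng.random(activations.shape) >= p_drop).astype(float)
    return activations * mask / (1.0 - p_drop)           # rescale surviving units

h = np.array([0.2, 1.5, 0.7, 0.0, 2.1])
print(dropout(h))
```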
YOLO: real-time object detection
The authors present an approach to object detection called YOLO (You Only Look Once). The idea is that a single neural network processes the whole image, dividing it into regions; the regions are outlined with bounding boxes and “weighted” according to the predicted probabilities. You can learn how to implement a “mini version” of YOLO for mobile devices running iOS from this article.
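Conceptually, “weighting regions by predicted probabilities” means something like the post-processing sketched below: each grid cell predicts a few candidate boxes with a confidence score and class probabilities, and a box's final score is confidence times class probability. The tensor layout, sizes and threshold here are illustrative assumptions, not the exact format used by the paper or by Darknet.

```python
import numpy as np

# Turn a YOLO-style prediction grid into scored boxes (illustrative layout).
S, B, C = 7, 2, 20                        # grid size, boxes per cell, classes
preds = np.random.rand(S, S, B, 5 + C)    # [x, y, w, h, confidence, class probs...]

boxes = []
for row in range(S):
    for col in range(S):
        for b in range(B):
            x, y, w, h, conf = preds[row, col, b, :5]
            class_probs = preds[row, col, b, 5:]
            cls = int(np.argmax(class_probs))
            score = conf * class_probs[cls]   # "weighted" by predicted probability
            if score > 0.2:                   # illustrative threshold
                boxes.append((row, col, x, y, w, h, cls, score))
# In a real detector the surviving boxes would then go through non-max suppression.
```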
How to predict unrecognizable images
A recent study has shown that changes to an image that are invisible to humans can fool deep neural networks into assigning the wrong label. This work sheds light on interesting differences between human and machine vision.
Deep Voice: Real-Time Text-to-Speech
The authors introduce Deep Voice, a text-to-speech system built on deep neural networks. According to the researchers, each component is its own neural network, which makes the system much faster than traditional solutions. Worth a look.
PixelNet: Representation of pixels, by pixels and for pixels
The authors investigate principles of generalization at the pixel level and propose an algorithm that performs well on tasks such as semantic segmentation, edge detection and surface-normal estimation.
OpenAI Generative Models
This post describes four projects that develop and apply generative models. The authors explain what generative models are, where they are used and why they matter.
Learning to generate chairs using convolutional neural networks.
This piece describes the process of training a generative convolutional neural network to produce images of objects of a given type and color. The network can interpolate between rows of images and fill in the “gaps” with the missing elements.
Generative Adversarial Network in 50 Lines of Code
How do you train a generative adversarial network (GAN)? Just take PyTorch and write 50 lines of code. Worth trying at your leisure.
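As a taste of what those 50 lines look like, here is a minimal GAN training loop in PyTorch: a generator maps noise to one-dimensional samples, a discriminator scores them, and the two networks are trained in alternation. The architectures, toy data distribution and hyperparameters are our own illustrative choices, not the post's actual code.

```python
import torch
import torch.nn as nn

# Toy GAN: G maps 8-D noise to a scalar sample, D scores samples as real/fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(64, 1) * 1.5 + 4.0   # toy "real" distribution: N(4, 1.5)

for step in range(2000):
    # train the discriminator on real vs. generated samples
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real_data), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # train the generator to fool the discriminator
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```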
And last but not least
Which book sits on the desks of many Neurodata Lab employees and counts among my personal favorites?
Book authors: Amit Konar, Aruna Chakraborty
Emotion Recognition: A Pattern Analysis Approach
Excellent material, well structured and supported by a wide range of sources and data. The book will suit anyone who is passionate about the problems of detecting and recognizing emotions from a technical point of view, as well as those who are simply looking for an engaging read.
P.S. We understand that it is impossible to cover all the available material on this topic within a single article. So if you are interested, you can also devote some attention to the curated collections on GitHub and other platforms. Here are some of them: