Industrial Neural Network Prediction System
More and more attention is being paid to process optimization, mainly in the form of lower production costs. Costs can be reduced by upgrading equipment, but this approach entails substantial expenses for design, purchasing, reconstruction, and so on, and is also accompanied by lost profit while the reconstructed unit is idle. Alternatively, a mathematical approach can be used to search for inefficiencies in the process, and that is what this article discusses.
Briefly about neural networks
A neural network is a system of connected and interacting simple processors (neurons).
Figure 1. Structural diagram of the neural network (green: input layer of neurons; blue: hidden (intermediate) layer of neurons; yellow: output layer of neurons).
A neuron is the basic element of a neural network: a single simple computing unit that receives, transforms, and passes on signals. Combining a large number of neurons into one network makes it possible to solve fairly complex problems.
Figure 2. Scheme of the neuron.
The neural network approach is free of model constraints: it is equally suitable for linear problems, complex nonlinear problems, and classification. Training a neural network primarily consists of changing the "strength" of the connections between neurons. Neural networks also scale well: they can solve problems at the level of a single piece of equipment or at the scale of a whole plant.
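As a minimal illustration of what a single neuron computes (a sketch for the reader, not code from this project), the weighted sum of inputs plus a bias is passed through an activation function; training adjusts the weights, i.e. the "strength" of the connections:

```python
import math

def neuron(inputs, weights, bias):
    """A single neuron: weighted sum of the inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical example: two inputs feeding one neuron
out = neuron([0.5, -1.2], [0.8, 0.3], bias=0.1)
print(round(out, 4))  # 0.5349
```

Training algorithms such as backpropagation change `weights` and `bias` so that the network's outputs move closer to the target values.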
Problem statement
The goal is to predict the sulfur content of the product as accurately as possible, which in turn makes it possible to keep the main process parameters at values that are optimal both for product quality and for process efficiency.
Units: ppm (parts per million).
Input data: historical values of the process parameters of the unit.
Verification data for the network forecasts: daily laboratory analyses of sulfur content.
Network training and testing
A total of 531 observations were used. The sample was split as follows: 70% of the observations were used for training, and 30% served as a control sample to assess the quality of training and to compare the networks with one another. The average sulfur content over all observations was 316.7 ppm.
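The 70/30 split described above can be sketched as follows (a generic illustration; the actual split was done inside the statistics packages):

```python
import random

def split_sample(observations, train_frac=0.70, seed=42):
    """Shuffle the observations and split them into a training set
    and a control (hold-out) set."""
    rng = random.Random(seed)
    idx = list(range(len(observations)))
    rng.shuffle(idx)
    cut = int(len(observations) * train_frac)
    train = [observations[i] for i in idx[:cut]]
    control = [observations[i] for i in idx[cut:]]
    return train, control

data = list(range(531))            # stand-in for the 531 observations
train, control = split_sample(data)
print(len(train), len(control))    # 371 160
```

Shuffling before splitting matters for process data: taking the first 70% chronologically could leave the control sample in an operating regime the network never saw.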
In total, four networks were selected based on the training results, with the following configurations:
Network No. 1: 20-22-1
Network No. 2: 20-26-1
Network No. 3: 20-27-1
Network No. 4: 20-16-1
The network configuration is written as AA-BB-C, where AA is the number of neurons in the input layer, BB the number in the hidden layer, and C the number in the output layer.
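A forward pass through such an AA-BB-C network (for instance the 20-26-1 configuration of network No. 2) can be sketched in pure Python; this is an illustration of the structure, not the trained networks from the packages, and the random weights and tanh activation are assumptions:

```python
import math
import random

def make_mlp(layout, seed=1):
    """Random weights for a fully connected network, e.g. layout (20, 26, 1)."""
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(layout, layout[1:]):
        w = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
        b = [0.0] * n_out
        layers.append((w, b))
    return layers

def forward(layers, x):
    """Propagate an input vector through every layer
    (tanh on hidden units, linear output for regression)."""
    for k, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if k < len(layers) - 1:       # activation on hidden layers only
            x = [math.tanh(v) for v in x]
    return x

net = make_mlp((20, 26, 1))           # the 20-26-1 configuration
y = forward(net, [0.0] * 20)          # one output: predicted sulfur content
print(len(y))                          # 1
```

The single output neuron is the predicted sulfur content in ppm; the 20 inputs correspond to the process parameters fed into the network.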
The networks were trained in specialized packages, of which there are many today (SPSS, Statistica, etc.). The histograms of the error distribution of the trained networks over the full set of observations are shown below:
Figure 3. Histogram of the error distribution for network No. 1.
Figure 4. Histogram of the error distribution for network No. 2.
Figure 5. Histogram of the error distribution for network No. 3.
Figure 6. Histogram of the error distribution for network No. 4.
From the histograms we can conclude that the network error follows a normal distribution, so the error can be divided into three regions (for simplicity the distribution is treated as normalized):
±1σ (the error falls in this range in 68% of forecasts);
±2σ (the error falls in this range in 95% of forecasts);
±3σ (gross errors, or misses: in fewer than 5% of cases the error exceeds the ±2σ region).
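The sigma bands can be computed directly from the array of forecast errors; the sketch below uses synthetic, roughly normal errors as a stand-in for the real ones, so the printed values are illustrative only:

```python
import math
import random

def sigma_bands(errors):
    """Standard deviation of the errors, plus the share of errors
    falling within +/-1 sigma and +/-2 sigma of the mean."""
    n = len(errors)
    mean = sum(errors) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    within = lambda k: sum(abs(e - mean) <= k * sigma for e in errors) / n
    return sigma, within(1), within(2)

# Synthetic errors: 531 draws from N(0, 18 ppm), mimicking the sample size
rng = random.Random(0)
errs = [rng.gauss(0, 18) for _ in range(531)]
sigma, p1, p2 = sigma_bands(errs)
print(round(sigma, 1), round(p1, 2), round(p2, 2))
```

For genuinely normal errors, `p1` and `p2` come out near the theoretical 0.68 and 0.95; a large deviation from those shares on real data would suggest the error is not normal after all.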
Errors by distribution region:
±1σ (68% of forecasts):
Network No. 1: ±16.4 ppm
Network No. 2: ±18.3 ppm
Network No. 3: ±19.0 ppm
Network No. 4: ±18.6 ppm
±2σ (95% of forecasts):
Network No. 1: ±43.9 ppm
Network No. 2: ±47.6 ppm
Network No. 3: ±42.8 ppm
Network No. 4: ±41.0 ppm
Gross errors (misses) in the ±3σ region occur when the network receives data very different from what was present in the training sample.
Another important indicator of the quality of a trained network is the mean absolute error.
Mean absolute error:
Network No. 1: 14.4 ppm
Network No. 2: 13.4 ppm
Network No. 3: 14.3 ppm
Network No. 4: 13.6 ppm
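The mean absolute error is simply the average of |lab value − network forecast| over the observations. A sketch with hypothetical values (the real lab and forecast series are not reproduced here):

```python
def mean_absolute_error(actual, predicted):
    """Average of |lab analysis - network forecast| over all observations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

lab  = [310.0, 326.0, 298.0, 340.0]   # hypothetical lab analyses, ppm
pred = [322.0, 319.0, 305.0, 331.0]   # hypothetical network forecasts, ppm
print(mean_absolute_error(lab, pred))  # 8.75
```

Unlike the sigma bands, MAE weights every forecast equally, so it is less sensitive to the few gross misses in the ±3σ region.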
Graphs of the sulfur content in the product (laboratory analysis) and of the absolute error are shown below:
Figure 7. Graph of sulfur content and absolute error for network No. 1.
Figure 8. Graph of sulfur content and absolute error for network No. 2.
Figure 9. Graph of sulfur content and absolute error for network No. 3.
Figure 10. Graph of sulfur content and absolute error for network No. 4.
Software implementation
To view the forecasts in real time, we wrote our own application in C#; the data is received from an OPC server. The first version was developed with a minimal set of features (charts, XML import, chart export, adding an arbitrary parameter to a chart). In the future we plan to add saving history to a database, comparing network forecasts with real historical values at given timestamps, training networks directly in the application, comparing networks with one another, and more.
Figure 11. Screenshot of the first version.
Conclusions
Favorable working conditions for the network:
The network produces the smallest error when the sulfur content of the final product lies roughly between 240–250 ppm and 400–410 ppm (the sulfur content from laboratory analysis, not the network forecast). This is because most of the measurements fell in this range, so the network was, in effect, trained on them. Neural networks can generalize, i.e. they can produce a forecast even from data they have never encountered, using the patterns of the training sample. Even so, it should be remembered that in such cases the result is hard to predict, and it can be stated with certainty that the error will grow.
In case of major changes at the facility, the network must be retrained.
Ways to improve:
- Accounting for the reaction time. The reaction cycle (from the moment the raw-material characteristics are measured, through the whole unit, to the point where the characteristics of the final product are measured) has a certain duration. For better data correlation, the raw-material parameters therefore need to be matched precisely with the corresponding product parameters, which increases forecast accuracy.
- Noise filtering. Besides the useful component of the signal, the sensor readings also contain noise. The noise is small, but it distorts training and, consequently, the subsequent forecasts, so the noise component should be taken into account by adding filters in front of the neural network inputs. The network output can also be filtered for a smoother change of the forecast. The range of filters available today is quite wide: from the simplest median and exponential filters to wavelets.
- Increasing the frequency of analyses. Taking more sulfur-content measurements per day would increase the amount of training data and, in turn, allow a better network to be built.
- Improving the accuracy of laboratory analysis. If technically feasible, increasing the accuracy of the analysis (by an order of magnitude) would make the data more tractable for the network: at present the same sulfur value corresponds to a wide scatter of the independent parameters, which increases the network error.
- Increasing the number of input variables. It is worth noting that even a slight correlation of a parameter with the target is significant, so the maximum possible number of parameters at the unit should be used, and, where possible, data from the unit that precedes the current one in the process chain.
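Of the filters mentioned above, exponential smoothing is the simplest to show; the sketch below (with made-up sensor readings) damps a noise spike before the signal would reach the network inputs:

```python
def exp_smooth(signal, alpha=0.3):
    """Exponential smoothing: each output mixes the new reading with the
    previous smoothed value. Smaller alpha means stronger filtering."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

raw = [100, 104, 96, 130, 101, 99]    # noisy sensor readings; 130 is a spike
print([round(v, 1) for v in exp_smooth(raw)])
# [100, 101.2, 99.6, 108.7, 106.4, 104.2]
```

The spike of 130 is reduced to about 108.7 at the cost of some lag; a median filter would remove such an isolated outlier entirely, while wavelets allow frequency-selective denoising.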
Summary
To sum up, the experience proved productive and extremely interesting. Although many comparisons and analyses of the results have been made, it is too early to draw final conclusions. Still, having tested neural networks on a real unit, we can conclude unambiguously that this approach is viable and has truly rich potential.