Deep Learning and the OpenVINO Toolkit: Intel Experts Answer
We are wrapping up the latest installment of the Habr series "Ask an Intel Expert", this time dedicated to Deep Learning and the Intel OpenVINO Toolkit. For reasons not entirely clear to us, reader activity was much lower than usual this time, but Intel did not give up: the questions still missing for a complete picture were collected offline. And yes, we also have a winner of the best-question contest; as per tradition, he is announced at the end of the post.
What benefits does OpenVINO offer its users, and why should it be used?
Yuri Gorbachev. The product's main advantages are performance, minimal size, and almost no dependencies. These are the requirements we prioritize in product development.
The product is ideal for building applications that use Deep Learning and Computer Vision. For example, when running networks on Intel platforms, our product's performance is several times higher than that of popular frameworks. We also have significantly lower memory requirements, which matters for a number of applications. Put simply, on some platforms it is impossible to launch a network with the usual frameworks at all due to lack of memory.
Is it possible to train networks with OpenVINO on Intel platforms? In particular, is it possible to train on Nervana platforms?
Yuri Gorbachev. No, training support is not included in the product. A good portion of our improvements come precisely from not expecting the product to be used for training (for example, memory reuse, layer fusing, and so on).
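The layer fusing mentioned above can be illustrated with a classic inference-only optimization: a BatchNorm layer following a convolution can be folded into the convolution's weight and bias, so one layer runs instead of two. Below is a minimal sketch in plain Python, using the scalar per-channel case for clarity; the function name and parameter layout are illustrative, not OpenVINO's actual internals.

```python
import math

def fold_batchnorm(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding convolution
    (scalar per-channel case for clarity).

    BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, applied to
    the conv output y = weight * x + bias, is itself an affine map,
    so it can be absorbed into a new weight and bias."""
    scale = gamma / math.sqrt(var + eps)
    return weight * scale, (bias - mean) * scale + beta

# Check: the fused conv matches conv followed by BatchNorm.
w, b = 2.0, 0.5                      # original conv parameters
g, bt, mu, v = 1.5, -0.3, 0.2, 4.0   # BatchNorm parameters
x = 3.0

conv_then_bn = g * ((w * x + b) - mu) / math.sqrt(v + 1e-5) + bt
fw, fb = fold_batchnorm(w, b, g, bt, mu, v)
assert abs(conv_then_bn - (fw * x + fb)) < 1e-9
```

At inference time the statistics (mean, var) are frozen, which is exactly why this fusion is safe here but unavailable during training.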
Why can't you simply launch neural networks with the framework they were trained in? How does the Deep Learning Inference Engine compare to frameworks? Why change anything at all?
Yuri Gorbachev. You can, of course. But if you need the best performance, the choice in favor of OpenVINO is easy to justify.
The Deep Learning Inference Engine currently offers the best performance on Intel platforms. We publish comparisons against popular frameworks, so there is probably no point in repeating them here. I will only say that even framework builds that use Intel libraries lose to our product in performance.
In addition, the Deep Learning Inference Engine is the only product that supports running networks from popular frameworks on all Intel platforms and under different operating systems. Some example scenarios:
- Running networks on FPGA is possible only through OpenVINO.
- Running Caffe / TensorFlow / MXNet networks on Intel GPU and Movidius.
- Full-fledged network execution on Intel platforms running Windows.
In general, running on Windows is a separate story. Not all frameworks support it out of the box; launching Caffe on Windows, for example, is not straightforward. TensorFlow ships Windows binaries, but if you need to make modifications and rebuild, there can be problems. At the same time, we see that running on Windows is often in demand.
What format is used for the intermediate representation of the network architecture? Does OpenVINO support NNEF?
Yuri Gorbachev. The ONNX standard is currently the most popular, largely owing to backing from Facebook, Microsoft, and other players. In particular, WinML accepts ONNX as input for execution and provides good tools for working with the format. I am skeptical about standardizing such things in general. Unfortunately, practice shows that as soon as the conversation moves to standardization committees, where representatives of different companies selling their own products sit, progress slows down severely. Already there is the problem that ONNX is not expressive enough for a number of existing networks. For example, Mask R-CNN, developed by Facebook itself, is not supported in ONNX, nor are the SSD and Faster R-CNN networks.
We are not considering NNEF support: there have been no requests from customers and, objectively speaking, the standard is rarely used. In practice, I have seen it in use only once, and, incidentally, that company has a contract with the Khronos Group.
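For context, OpenVINO's own intermediate representation is a pair of files: an .xml describing the topology and a .bin with the weights. Since the topology file is plain XML, it can be inspected with standard tools. Below is a hedged sketch using Python's stdlib xml.etree; the embedded XML fragment is a deliberately simplified, hypothetical example, not a complete real IR (real files carry many more attributes plus an edges section).

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical fragment in the spirit of an IR .xml file.
IR_XML = """
<net name="toy_net" version="2">
  <layers>
    <layer id="0" name="input" type="Input"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

def list_layers(xml_text):
    """Return (name, type) pairs for every layer in an IR-style file."""
    root = ET.fromstring(xml_text)
    return [(layer.get("name"), layer.get("type"))
            for layer in root.iter("layer")]

print(list_layers(IR_XML))
# → [('input', 'Input'), ('conv1', 'Convolution'), ('prob', 'SoftMax')]
```

This kind of inspection is handy when a converted model fails to load and you want to see which layer types actually ended up in the IR.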
What tools are there for analyzing Intel GPU performance while running multiple networks simultaneously?
Yuri Gorbachev. I think the most suitable product is Intel VTune. We use it in development ourselves; it shows a lot of useful information, and mastering it even at a basic level is a significant help. Incidentally, the Deep Learning Inference Engine lets you implement layers yourself, and in the course of such an implementation you will probably still need a profiler to achieve the best performance.
Question from Hanry396
Researchers have identified a unique "breath print" for 17 different diseases, such as kidney cancer and Parkinson's disease, and developed a device that recognizes breathing patterns with 86% accuracy, using an array of nanoscale sensors and analyzing the results with artificial-intelligence methods. Hence the question: how far do you think AI can develop in medicine, and will it make it possible to merge the human brain with a computer?
Yuri Gorbachev. AI is already developing in medicine, mostly at a fairly basic level so far, but the steps are clearly visible. Network-based segmentation of MRI images is becoming popular, and our customers are already evaluating which platforms are most productive for such tasks, which is evidence that products are being prepared for release. It seems important to me that networks are often used not only to speed up diagnostics but also to improve their quality.
It is scary to think about merging a computer with a human brain. At the very least, current approaches to AI problems look rather clumsy compared to the human brain.
I tried to integrate OpenVINO with ROS and did not succeed. How do I integrate OpenVINO into ROS correctly?
Yuri Gorbachev. It is somewhat difficult to answer, since it is not clear what exactly failed. Linking a specific ROS node against OpenVINO should be the most basic approach; we have used this method ourselves, and it worked.
I recommend asking a more detailed question on our forum; our team, together with product support, answers questions there.
Biomedical image processing often uses five-dimensional input data, three-dimensional convolutions, and other such operations. Are there plans to support them? Is there, or is there planned, support for recurrent networks, or for networks (or individual layers) with shared sets of parameters?
Yuri Gorbachev. Yes, we are planning and implementing support for 3D convolutions and pooling. I think we can expect a product release with this support by the end of the year. There will also be support for recurrent networks (LSTM / RNN).
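For reference, the recurrent primitive mentioned here, a single LSTM cell step, can be written out in plain Python. This is the textbook scalar form with made-up parameter names; real implementations are vectorized and heavily optimized.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One scalar LSTM step. p holds weights (W*, U*, b*) for the
    input (i), forget (f), output (o) gates and candidate (g)."""
    i = sigmoid(p["Wi"] * x + p["Ui"] * h_prev + p["bi"])
    f = sigmoid(p["Wf"] * x + p["Uf"] * h_prev + p["bf"])
    o = sigmoid(p["Wo"] * x + p["Uo"] * h_prev + p["bo"])
    g = math.tanh(p["Wg"] * x + p["Ug"] * h_prev + p["bg"])
    c = f * c_prev + i * g          # new cell state
    h = o * math.tanh(c)            # new hidden state
    return h, c

params = {k: 0.5 for k in
          ("Wi", "Ui", "bi", "Wf", "Uf", "bf",
           "Wo", "Uo", "bo", "Wg", "Ug", "bg")}
h, c = lstm_step(1.0, 0.0, 0.0, params)
assert -1.0 < h < 1.0  # hidden state is bounded by tanh and sigmoid
```

The step carries state (h, c) across time, which is exactly what makes recurrent networks harder to fuse and schedule than feed-forward ones.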
Why is the OpenCV library included in OpenVINO in binary form? After all, anyone can download and build it themselves.
Yuri Gorbachev. The reasons are quite mundane. OpenCV is available in source form, and building it is in essence a simple task. Building the most efficient package is somewhat harder. This often raises questions, so we decided simply to provide a ready-made package. We do not use any special magic; we just build correctly and with the right options.
Of course, using it is not mandatory, but I would recommend comparing the performance of your application with our build and with a custom one. In some cases customers were able to speed up their applications simply by switching to our distribution.
Also, in the OpenVINO distribution the OpenCV DNN module uses the Inference Engine as its backend for running networks. On average this gives a speedup of at least 2x compared to the native OpenCV code.
OpenVINO includes pre-trained models. How do they differ from the models available online? Can they be used in applications, and are there any restrictions on commercial use?
Yuri Gorbachev. Indeed, OpenVINO ships with models whose use imposes no restrictions at all (apart from attempts to reconstruct the original model from the intermediate representation format) and does not require signing a license agreement.
There are two differences from public models:
- Performance and model size. All the supplied models solve a narrow problem (for example, pedestrian detection), which allows us to significantly reduce their size. Public models attempt to solve a more general problem (detecting several classes of objects), which requires far more computationally complex models with a large number of parameters. In the example above (pedestrian detection), our model can solve the problem 10+ times faster than a public one, with quality that is by no means the worst.
- Solving exotic problems. It often happens that a task attracts little interest from the academic community, and a public model is hard to find. Examples include detecting head rotation angles or analyzing age and gender. In such cases our model frees you from having to find a dataset and train a model yourself.
It looks like several models do the same thing, for example face detection. Why are there so many of them?
Yuri Gorbachev. They differ in two ways:
- The speed/quality trade-off. Some models work much faster at the cost of a slight loss in quality. Depending on the application's requirements, you can choose one or the other.
- Different problem settings. For example, the camera angle relative to a person can affect detection quality, so we provide two models for the different cases.
Note that each model comes with a description file where you can find performance figures, model accuracy, and examples of the images expected as input, that is, a description of the intended scenario.
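The speed/quality choice described above lends itself to a simple selection rule: take the most accurate model that still meets your throughput budget. A toy sketch; the model names and all figures below are made up for illustration, not actual model-zoo numbers.

```python
# Hypothetical catalog built from the models' description files;
# the names and figures are invented, not real model-zoo data.
MODELS = [
    {"name": "face-detection-fast",     "fps": 120.0, "accuracy": 0.89},
    {"name": "face-detection-accurate", "fps": 35.0,  "accuracy": 0.95},
]

def pick_model(models, min_fps):
    """Return the most accurate model meeting a throughput budget,
    or None if no model is fast enough."""
    viable = [m for m in models if m["fps"] >= min_fps]
    if not viable:
        return None
    return max(viable, key=lambda m: m["accuracy"])

print(pick_model(MODELS, 100.0)["name"])  # → face-detection-fast
print(pick_model(MODELS, 30.0)["name"])   # → face-detection-accurate
```

In practice the budget comes from the application (camera frame rate, number of simultaneous streams), and the figures come from the description files shipped with each model.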
Why are recurrent networks not supported? Do you plan to support the relevant primitives and the topologies that use them?
Yuri Gorbachev. Support is being implemented; it is simply a matter of time and priorities. I think we should deliver this functionality, along with a number of other innovations, by the end of the year.
What if I try to import a model and get errors? Can I solve this on my own, or do I have to go to support? Will they support me at all?
Yuri Gorbachev. At the moment you can certainly turn to the support forum; we answer a fairly large number of questions there and solve problems. Note that the Model Optimizer component is a set of Python scripts, which in principle means you can inspect and fix things yourself if you are interested.
We are also planning to release the source code, which should make it possible to do more complex things than just fixing bugs.
And the winner of the contest is Habr user S_o_T, with the question about NNEF support. Congratulations!