Exposing an Intel Promotional Article

    For some time, while implementing various image processing algorithms, I could not help but hear about the Intel Integrated Performance Primitives (Intel IPP) package. It is a set of high-performance functions for processing one-, two-, and three-dimensional data that takes full advantage of the capabilities of modern processors. These are the kind of building blocks with universal interfaces from which you can assemble your own applications and libraries. The product is, of course, commercial: it is bundled with other Intel development tools and is not distributed separately.

    Ever since I learned about this package, I have wanted to find out how fast its image resizing is. There are no official benchmarks or performance figures in the documentation, nor are there any third-party benchmarks. The closest thing I managed to find was the JPEG codec benchmarks from the libjpeg-turbo project.

    And so, the day before yesterday, while preparing the article “Image Resize Methods” (reading it is highly recommended for understanding the discussion below), I once again came across the article that this post is about:

    libNthumb, The NHN * Performance Primitive for Real-Time Creation of Thumbnail Image with Intel IPP Library

    The article shows up on the first or second line of Google results for the query “intel ipp image resize benchmarks”; it is hosted on the Intel website, and two Intel employees are among its authors.

    The article describes load testing of a certain libNthumb library against the well-known ImageMagick. It is emphasized that libNthumb is built on Intel IPP and takes advantage of it. Both libraries read a 12-megapixel 4000 × 3000 JPEG file, resize it to 400 × 300, and then save it back to JPEG. The libNthumb library is tested in two modes:

    libNthumb — the JPEG image is not decoded at its native resolution but is reduced by a factor of 8 during decoding (i.e., to 500 × 375). The JPEG format allows this without fully decoding the original image (a rough sketch of this trick is given below, after the diagram). After that, the image is scaled to the target resolution of 400 × 300.

    libNthumbIppOnly — the image is decoded at its original resolution and then scaled to the target resolution of 400 × 300 in a single pass. This mode is intended, as the article states, to highlight the performance difference coming specifically from the use of Intel IPP, without additional technical tricks. Here is an explanatory diagram from the article itself:



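    For reference, here is a minimal sketch of the 1/8-scale decoding trick described above, assuming a Pillow-based pipeline rather than the libraries from the article (the file name and quality setting are made up for the example): the JPEG decoder is asked to decode at a reduced scale, and only then is the image resized to the exact target size.

        # Not the code from the article: a minimal sketch of the same pipeline
        # using Pillow, just to illustrate reduced-scale JPEG decoding.
        from PIL import Image

        im = Image.open("photo_4000x3000.jpg")   # hypothetical 12-megapixel source

        # Ask the JPEG decoder to decode at a reduced scale (DCT scaling).
        # Pillow rounds the request to 1/1, 1/2, 1/4 or 1/8 of the original,
        # so for a 400x300 target a 4000x3000 source is decoded at about 500x375.
        im.draft("RGB", (400, 300))

        # Finish with an ordinary resize and save the thumbnail.
        im = im.resize((400, 300))
        im.save("thumbnail.jpg", quality=85)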
    After that, the article presents performance measurements for all three options (ImageMagick, libNthumb, libNthumbIppOnly) broken down into decoding time, resize time, compression time, and total time. I was interested in the resize results. Quote:

    The figure below shows average elapsed time for resizing process.



    Regardless of the number of worker threads, libNthumb shows about 400X performance gain over ImageMagick. It is because libNthumb has smaller data set through IDCT scale factor during decoding process.

    As you can see from the graph, libNthumb is about 400 times faster than ImageMagick in both cases. The authors explain this by the fact that libNthumb processes far less input data (64 times less, to be precise: reducing the image by a factor of 8 in each dimension right at decode time means 500 × 375 ≈ 0.19 MP instead of 4000 × 3000 = 12 MP). That does explain the result of libNthumb, but it does not explain the result of libNthumbIppOnly, which should be working with the same input data as ImageMagick, i.e. the full-size image. The authors say nothing about the outstanding result of libNthumbIppOnly.

    It seems to me (in fact, I am sure, but the narrative format forces me to write objectively) that libNthumb (and therefore Intel IPP) and ImageMagick use completely different image resizing methods. It seems to me that Intel IPP uses the affine transform method (here I refer you once again to the article on image resize methods): a method that runs in constant time regardless of the size of the original image, a method completely unsuitable for downscaling by more than a factor of 2, a method that gives a rather poor result even when downscaling by up to a factor of 2. ImageMagick, on the other hand, uses resampling based on convolutions (here I am absolutely sure), which gives a much better result but also requires much more computation.
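    To make the difference concrete, below is a small illustrative sketch of both approaches in one dimension (my own simplification, not code from IPP or ImageMagick). The “affine” resample does a fixed amount of work per output pixel and simply skips most of the input when downscaling heavily, while the convolution resample widens its filter window with the scale factor and reads every source sample.

        # Illustrative 1-D sketch only; not the actual IPP or ImageMagick code.
        import numpy as np

        def resize_affine(src, out_len):
            # Fixed 2-tap (bilinear) sampling: constant work per output pixel.
            # When downscaling more than 2x, most source samples are never read,
            # which is exactly what produces aliasing / "pixelation".
            in_len = len(src)
            x = (np.arange(out_len) + 0.5) * in_len / out_len - 0.5
            x0 = np.clip(np.floor(x).astype(int), 0, in_len - 2)
            t = x - x0
            return src[x0] * (1 - t) + src[x0 + 1] * t

        def resize_convolution(src, out_len):
            # Triangle (linear) filter whose support grows with the scale factor:
            # every source sample contributes, so cost grows with the input size.
            in_len = len(src)
            scale = in_len / out_len
            out = np.empty(out_len)
            for i in range(out_len):
                center = (i + 0.5) * scale - 0.5
                left = int(np.floor(center - scale)) + 1
                right = int(np.ceil(center + scale))
                idx = np.arange(left, right)
                weights = np.maximum(0.0, 1.0 - np.abs(idx - center) / scale)
                idx = np.clip(idx, 0, in_len - 1)
                out[i] = np.dot(src[idx], weights) / weights.sum()
            return out

        src = np.arange(4000, dtype=float)
        print(resize_affine(src, 400)[:3])       # reads ~2 samples per output pixel
        print(resize_convolution(src, 400)[:3])  # reads ~20 samples per output pixel

    For a 10x downscale the first variant touches about 2 source samples per output pixel and the second about 20, which is roughly where the difference in both speed and quality comes from.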

    If my assumption is correct, it should be easy to see in the resulting image: the output of libNthumbIppOnly should be heavily pixelated. And indeed, the original article has a section comparing the quality of the resulting pictures. I quote it in full:

    The thumbnail image should have a certain level of quality. When it comes to the quality of thumbnail image from libNthumb, the quality difference from ImageMagick is invisible to the naked eye. The below pictures are thumbnail images generated by ImageMagick and libNthumb, respectively.

    Thumbnail Image by ImageMagick


    Thumbnail Image by libNthumb


    There are various methods of resizing. Image quality will differ depending on the filter used. libNthumb improves image quality through multi-level resizing and sharpening filters, each having a different look and feel.

    Miraculously, the result of libNthumbIppOnly is not there. Moreover, the images above have different resolutions, and both are smaller than the declared 400 × 300 pixels.

    As I already said, I believe the affine transform method is completely unsuitable for arbitrary image resizing, and if it is used somewhere, that is an outright bug. But that is only my opinion, and I cannot stop Intel from selling a library that implements this particular method. What I can object to is misleading other developers by passing off the difference between two fundamentally different algorithms as a difference in implementation. If an article claims that something works 400 times faster, it should make sure the result is the same.

    I have nothing against Intel and I am not trying to harm it in any way. From my point of view, Intel should do the following:

    1) Correct the article. Explain why the performance of ImageMagick and IPP cannot be compared directly.
    2) Correct the documentation. Right now one can only guess from indirect signs that linear, cubic, and Lanczos interpolation are implemented via affine transformations and are therefore unsuitable for downscaling by more than a factor of 2: the exact filter algorithms are given only in an appendix to the documentation.
    3) (pure fantasy) Send an apology to the product's users for misleading them.
