GPU to the rescue?

    When Intel added a graphics core to its processors last year, the move was met with approving nods but, as they say, without fanfare. After all, if the memory controller had already migrated into the CPU, why not move the rest of the northbridge functionality there as well? And so integrated graphics settled in everywhere, from the Intel Atom to mobile Core i7 chips. Those who wanted to used it; those who didn't chose a computer with a discrete accelerator. By and large, nothing really changed. And the fact that the graphics component quietly helped the CPU decode video seemed perfectly normal; we were used to it.

    Meanwhile, the colleagues in the business like the idea, and graphics cores will soon settle into their processors as well. But it doesn't stop there: the plan is to use the GPU across the widest possible range of tasks, and the chips' official name will even change from CPU to APU. According to AMD, the number of x86 cores cannot be increased indefinitely, because bottlenecks start to appear that reduce the payoff from multiplying cores. The GPU, however, sits right next door, and thanks to its dissimilarity it doesn't get in the ordinary cores' way, so with some effort ...

    Anyway, take a look at this slide.

    [image: slide]

    According to it, so-called "heterogeneous-core" processors are hot on the heels of multi-core ones, and this is in strict accordance with Moore's law. The former have many advantages, but their drawbacks are serious too. Perhaps the most unpleasant one: hybrid processors still fit very poorly into familiar programming models, and getting a weary programmer to switch is oh so difficult. Nevertheless, effort and money have already been invested in developer support, the marketing machine has been switched on, and the work is under way.
    I don't happen to have a magic crystal ball at hand that would reveal Intel's far-reaching plans. But if you apply a little logic and look at the architecture of the upcoming Sandy Bridge processors, it's safe to assume that the graphics component is not going to disappear from the processor, and that it will therefore get faster and faster from model to model. In fact, progress is already visible today: in Sandy Bridge, Turbo Boost technology has reached the integrated graphics, so under load its clock can rise from the standard 650-850 MHz up to 1350 MHz. Quite a toy. And if that is so, it would be logical at some point to start using that graphics horsepower to speed up all sorts of computations.
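    To make that "new programming style" a little less abstract: the post doesn't name a particular API, but at the time the vendor-neutral way to hand general-purpose work to a GPU, integrated or discrete, was OpenCL. Below is a minimal sketch, assuming an installed OpenCL runtime and headers, of offloading a trivial vector addition; the kernel name vadd and the surrounding ceremony are my own illustration, with error checking omitted for brevity.

        /* Minimal sketch: add two vectors on the GPU via OpenCL.
           Illustrative only -- no error checking, assumes an OpenCL
           runtime for the (integrated) GPU is installed. */
        #include <CL/cl.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* The kernel: each work-item adds one pair of elements. */
        static const char *src =
            "__kernel void vadd(__global const float *a,\n"
            "                   __global const float *b,\n"
            "                   __global float *c) {\n"
            "    size_t i = get_global_id(0);\n"
            "    c[i] = a[i] + b[i];\n"
            "}\n";

        int main(void)
        {
            enum { N = 1 << 20 };
            size_t bytes = N * sizeof(float), global = N;
            float *a = malloc(bytes), *b = malloc(bytes), *c = malloc(bytes);
            for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

            /* Pick the first platform and its first GPU device. */
            cl_platform_id platform;
            cl_device_id dev;
            clGetPlatformIDs(1, &platform, NULL);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            /* The kernel is compiled for this particular GPU at run time. */
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", NULL);

            /* Copy inputs into device-visible buffers, allocate the output. */
            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a, NULL);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b, NULL);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, NULL);

            clSetKernelArg(k, 0, sizeof(cl_mem), &da);
            clSetKernelArg(k, 1, sizeof(cl_mem), &db);
            clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

            /* Launch N work-items, then read the result back to the host. */
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c, 0, NULL, NULL);

            printf("c[42] = %.1f (expected 126.0)\n", c[42]);

            clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
            clReleaseKernel(k); clReleaseProgram(prog);
            clReleaseCommandQueue(q); clReleaseContext(ctx);
            free(a); free(b); free(c);
            return 0;
        }

    Even for so trivial a task, the host has to discover a device, compile a kernel at run time and shuttle buffers back and forth before any actual work happens (something like gcc vadd.c -lOpenCL to build, assuming a vendor SDK is present). That ceremony, more than the kernel itself, is what makes the switch from a familiar CPU loop feel so costly to a tired programmer.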

    I have written more than once about how graphics chips "help" today. Let's just say that not everything there is smooth (see, for example, this post on ISN about GPU effectiveness in video conversion). But suppose that tomorrow's integrated GPUs, while still not very bright at tasks that are atypical for them, learn to be accurate without losing speed. Then the question arises: by drafting them into the common cause, won't we end up tormenting ourselves even more than we did with multi-core CPUs?

    Indeed, nothing on a computer happens by itself, and technologies that look so beautiful in presentations cannot be introduced without painstaking work by software developers. So how do things stand today? Do the existing examples of successful GPU use in computing impress you personally? Are developers willing to promote the friendship of CPU and GPU on an industrial scale? How effective will this tandem really be? If it is effective, how quickly can one relearn to a new programming style? And will there be some standard approach, or will each professional seek (and find) his own way?

    There are more questions than answers, but that is hardly surprising: the CPU and GPU are still only sizing each other up, wondering what benefit might be drawn from the differences of their neighbor on the die. But perhaps you have already given this alliance some thought and can offer your own prediction?
