
A supercomputer from a desktop PC: is it real?
Is it possible to turn a personal computer into a powerful tool for specialized workloads, in effect a supercomputer? Yes, that opportunity has now appeared, although not quite along the path analysts predicted at the end of the last century. Over the past five or six years, video cards have pulled significantly ahead of the traditional computing workhorse, the central processor, and as a result they have begun to be used for tasks far outside their usual role.
The idea of GPU computing is not new. Last year ATI, now part of AMD, promoted the use of graphics processors for specialized calculations. Today video cards serve as computing devices at the American oil company Hess, processing the colossal volumes of data produced by geological surveys. Another example is the Folding@Home project, which uses GPUs for scientific computation: it originally relied on ATI chips, but at the end of last year Stanford researchers announced their interest in nVidia's GeForce 8800 chips as well.
To advance the idea of GPUs as specialized computing solutions, nVidia has released the first public beta of its CUDA development kit. The SDK simplifies the development of software that can use the GeForce 8800 family of video cards for general-purpose calculations. Recall that the GeForce 8800 GTX contains 128 “stream processors” which, according to nVidia, deliver a theoretical peak of around 520 GFlops; with two video cards you can approach 1 TFlops.
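To give a sense of the programming model the SDK exposes, here is a minimal sketch of our own (it is not from the original article): a CUDA kernel is an ordinary C function marked __global__ that runs in parallel across thousands of lightweight threads, which is how work gets spread over the 128 stream processors. For reference, nVidia's peak figure corresponds roughly to 128 processors × 1.35 GHz shader clock × 3 floating-point operations per cycle ≈ 518 GFlops. A vector addition, the canonical first example, looks like this:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, plus copies of the input to the card.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements;
    // the GPU schedules the blocks across its stream processors.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The key design point is that the programmer writes one scalar function per element and lets the hardware decide how many threads run simultaneously, which is what makes the same code scale from one card to two.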
Even allowing for the enormous gap between theory and practice, real-world performance should still be very high. This may interest corporate clients, but the cost of building the supporting infrastructure could stun even a large multinational corporation. It will be more than a year before GPUs see real adoption in the corporate sector.
Source: www.thg.ru/technews/20070220_085307.html