The computer of your dreams. Part 2: Reality and Fiction



    The tasks are set, and it would seem time to dive into various reviews and tests, but...
    Where to start? What should we even be looking at?
    image

    The second part of this opus is devoted to the key components of a modern system: the central processor, RAM, and the video card. Hardly anyone doubts that these components are fundamental; the question is different: which of them matters more? What should you look at first when assembling a machine for a particular task? What does each device do, and what do its characteristics mean?

    First, a small digression.
    My previous article got a rather mixed reaction on Habr. Most of the negative comments boiled down to one thing: “These are all obvious things! Nothing new! Where are the specifics?”
    What can I do: I can’t write briefly (yes, brevity is the sister of talent, but not mine), and I can’t help making digressions. I can’t write instructions in the style of “do this, do that, do the other.”
    And my previous piece is not an instruction, not a “how to assemble a computer” guide. It is an introduction to one large body of material, describing only general principles. The material itself is also not an instruction or a guide, but food for thought, an occasion to reconsider some of your views and start thinking in situations where you used to consider it unnecessary.
    I planned to publish the second part 2–3 days after the first and started writing almost immediately, but somehow it wasn’t coming together. A week later about 60 percent was ready. I sat down, read what I had written, and disliked it intensely. It really had turned out too drawn-out and boring. So everything was rewritten anew, almost from scratch. I tried to make the material more readable and, as far as possible, shorter. I would be glad if my efforts were not in vain =)

    Part 2. Reality and Fiction


    There is a huge number of articles about computer hardware, written by all sorts of resources, labs, and individual authors. Everyone has their own methodology, their own criteria, their own sets of test programs, and in the end the tested hardware differs everywhere too. It’s fine if we can find tests of the device we care about in the application we care about. And if not? What if the hardware is too new, or simply unpopular? What if reviewers ignore the use case we are interested in? Things get difficult. The only way out is to work through a large amount of material close to the question at hand and, on that basis, draw general conclusions and identify trends. My opus is, in fact, the result of such research.
    While writing this article I drew on a large number of reviews from the following resources:
    I consider them worthy, high-quality reviewers.
    The main principles for comparing results were:
    • The greatest possible equality of testing conditions;
    • Refusal to use the results of synthetic tests and benchmarks.
    On the basis of the materials studied, conclusions were drawn, and in places graphs were plotted for clarity.
    The picture below does not claim absolute accuracy, but it reflects the current hardware landscape quite well.

    CPU


    — Can’t choose a heater? Buy an AMD processor!
    To start with, throw all that nonsense out of your head. You can hold as many Intel vs. AMD flame wars as you like, but there is no benefit in it.
    — And why the Pentium and not a Core 2 Duo?
    Which Pentium are you going to compare? Six generations of processors have already shipped under this brand, each in several varieties and with a large number of final models with different characteristics.
    Characteristics are what really matter. You can look them up by the full (attention: full!) name of the processor line and model on the manufacturer’s official website or on decent thematic resources.
    I will not list here all the characteristics of modern CPUs; there are too many of them. We are interested in the following:
    • The core and its stepping;
    • The number of computing cores;
    • Clock frequency;
    • Type and frequency of system bus;
    • Type and value of the multiplier;
    • Characteristics of the cache memory (number of levels and volume on each of them);
    • Sets of instructions and extensions, support for various proprietary technologies;
    • Power consumption and heat dissipation (TDP).
    Now let’s go through them in order.

    Core

    We need to know the name of the core if only because it determines which microarchitecture the processor model belongs to. Without that information, comparing all the other characteristics can give results very far from reality.
    Now 5 microarchitectures are quite common:
    • AMD K8 (almost all Athlon 64 and Athlon 64 X2, with the exception of just a couple of models);
    • AMD K10 (Athlon 7xxx and Phenom of the first generation, as well as a pair of Athlon 64 X2 models);
    • AMD K10.5 (Athlon II and Phenom II, as well as the new Sempron - all models of this microarchitecture have a three-digit index);
    • Intel Core 2 (includes two generations of processors - 65nm Conroe and 45nm Penryn);
    • Intel Nehalem (Core i7, i5, i3, etc.).
    Various K7s, NetBursts, and other antiques will not be considered.
    The graph below compares the performance of modern microarchitectures across various tasks. The Intel Conroe results are taken as the reference value.
    image

    You can immediately see the hopeless obsolescence of the K8 and the not-so-great showing of the K10. K10.5 looks much more pleasant, in places even overtaking Intel’s products, but the across-the-board loss of all AMD processors in graphics-related tasks is disappointing. As for Intel’s products: steady growth in the transition to Penryn, and even more so to the more modern Nehalem, which, on top of everything else, also distinguished itself in video encoding.
    In general, conclusions about the starting balance of power can be drawn fairly confidently.
    As for core stepping (revision), it mainly affects the TDP and the frequency potential of the processor. Occasionally new steppings bring additional instruction sets and extensions. Check on the manufacturer’s website =)

    Number of Cores

    The war between supporters of dual-core and quad-core CPUs began when the latter had only just appeared in announcements, and lances are still being broken in the disputes. Over time, compromise solutions entered the “war”: triple-core CPUs (offered, however, only by AMD), which naturally raised new questions. How important is multi-core? Or rather: where exactly is multi-core important, in which applications?
    image

    The first thing the graph makes clear: not in games. There are, of course, a few games where the performance gain is quite high (up to 50%), but they are very few, fewer than the fingers on your hands (the most core-hungry being GTA IV and World in Conflict) =) In the majority it is from 0 to 15% and decreases as quality settings and resolution increase.
    Multi-core configurations show themselves most effectively in 3D rendering and compilation. Results in video encoding are also good. In graphics work there is growth, but not as much as we would like.
    And finally, a heavy blow to stereotypes: data compression. Archivers post impressive numbers in their built-in synthetic benchmarks, but when it comes to working with real data, it turns out the extra cores are about as useful to them as a fifth leg to a dog.
    The triple-core configuration really is very interesting: excellent results in applications optimized for multithreading, and far more benefit than a quad-core in non-optimized ones. Given the small price difference from dual-core CPUs, it can be a good choice.
    As for Hyper-Threading (virtual parallelization), revived in Intel Core i7 processors, it is only useful in applications very sensitive to core count (the ones whose bar on the graph is longer than 40 ;)). And no, enabling HT does not give any mythical twofold growth, only about a third of what doubling the real number of cores would give. I won’t include the charts.

    Clock frequency

    A parameter that has historically been the “main measure of performance,” and it largely remains so today.
    image

    Yes, the dependence of performance on CPU frequency is noticeably stronger than on the number of cores. For example, raising the frequency from 2.8 to 3.8 GHz, i.e. by about 35%, gives in most cases a performance gain of more than 30%, which is very good. With games the picture is worse: the effect of frequency is less than half as strong, and, as with core count, it decreases as quality settings, resolution, and... the frequency itself increase, i.e. the dependence is nonlinear. Digging into the tests, we can conclude that the range from 2 to 3 GHz matters most, while games are fairly indifferent to ultra-high frequencies (above 4 GHz).
    Most modern processors have factory frequencies of exactly 2 to 3 GHz, so buying a model with a higher frequency (by 300–500 MHz) really will give an advantage everywhere.
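As a back-of-the-envelope sketch of the nonlinear scaling just described (my own illustrative model with a made-up sensitivity coefficient, not a formula from the tests):

```python
def est_speedup(f_old_ghz, f_new_ghz, sensitivity=0.9):
    """Rough estimate of the gain from a CPU frequency bump.

    `sensitivity` is a hypothetical coefficient: ~0.9 for workloads
    that scale well with frequency, much lower for GPU-bound games
    at high settings. Illustrative numbers, not measured values.
    """
    return (f_new_ghz / f_old_ghz - 1.0) * sensitivity

# A 2.8 -> 3.8 GHz bump is a ~35.7% frequency increase; at 0.9
# sensitivity that suggests roughly a 32% performance gain.
print(round(est_speedup(2.8, 3.8) * 100, 1))
```

For games at high settings you would plug in a much smaller sensitivity, which is exactly why the frequency race above ~4 GHz buys so little there.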

    System bus frequency and multiplier

    Manufacturers love to innovate in system buses, heavily advertising all kinds of QuickPaths and HyperTransports, which in bandwidth terms leave the “classic” FSB far behind.
    If anyone believed those words, I recommend going back to the first graph. Perhaps the merits of high-performance buses are in there somewhere, but they are clearly not a factor with a strong influence on the result.
    No, at one time it was of course very important. But nowadays even budget solutions with the “morally obsolete” FSB run at an effective frequency of 800 MHz out of the box and have a throughput of 6.4 GB/s. Here we run into the concept of “sufficiency.”
    For performance, the system bus frequency matters mainly as the base speed of data exchange between the processor and RAM. Realizing the potential of high-speed buses requires high-speed RAM (and vice versa), so let’s set this question aside until the RAM tests.
    As for peripherals: the junior Nehalems, for example, use a DMI bus with 2 GB/s of throughput to communicate with them. And again, that is enough.
    The multiplier determines the resulting core frequency. Depending on the implementation, it multiplies either the system bus frequency or the base clock generator frequency. The multiplier can be fully locked (rare these days), unlocked downward (found almost everywhere, since this underlies energy-saving technologies), or fully unlocked (special overclocker processor models).
    Hence the conclusion: in the general case the type and frequency of the system bus, as well as the multiplier, can be ignored.
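For completeness, the relationship between bus, multiplier, and core frequency is a plain multiplication (a sketch; the example figures are typical of FSB-era parts, not values taken from the article):

```python
def core_clock_mhz(bus_mhz, multiplier):
    """Core frequency = system bus (or base clock) frequency x multiplier."""
    return bus_mhz * multiplier

# e.g. a 333 MHz bus with a 9x multiplier gives a ~3 GHz core clock
print(core_clock_mhz(333, 9))   # 2997
```

An unlocked-downward multiplier is what lets the CPU drop, say, from 9x to 6x at idle without touching the bus clock.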

    Cache size

    Cache size is one of the parameters that most strongly affects a processor’s positioning in the lineup. For example, the Core 2 Duo E8300 and Pentium Dual-Core E6300 are made on the same core, Wolfdale, and differ primarily in cache size: 6 MB versus 2 MB. Similarly, the Phenom II X4 925 differs from the Athlon II X4 630 only in having 6 MB of L3 cache, and so on. The price gap between the junior and senior processor lines is also large. And what about in practice?
    image

    As you can see, the effect of cache size varies and depends heavily on the specific task. There is no linearity to speak of, and the 4 MB -> 6 MB transition gives a very small gain almost everywhere, so conclusions about sufficiency suggest themselves. Also, unlike in the previous tests, games show a strong love for cache.
    Of all this variety of results, the one shown in black on the graph, i.e. 2 MB -> 6 MB, interests us most. Why? The junior Intel Pentiums have 2 MB of cache, the senior Core 2 Duos have 4–6 MB. Among modern AMD products, most processors have 2 MB of L2 cache overall, and the Phenom II additionally has 4–6 MB of L3 cache (for the first Phenom this figure is more modest: 2 MB). Given what was said above about sufficiency, we can conclude that the “black bar” shows the performance ratio between junior and senior processors quite clearly. A deeper study of the tests confirms this.

    RAM


    The RAM subsystem configuration has only a few characteristics that matter to us:
    • Type of memory;
    • Real and effective frequency;
    • Number of channels;
    • Timing Scheme;
    • Volume.

    Memory type

    It is fundamental to the rest of the memory’s characteristics, but by itself it is not important for performance.

    Frequency

    The memory frequency by itself is not important; what matters is the bandwidth it provides. Memory bandwidth is the product of the effective frequency and the width of the memory bus. The memory bus in modern PCs is 64 bits wide, so DDR2-800 memory has a bandwidth of 6400 MB/s, which is listed as one of the memory’s characteristics (PC2-6400, for example).
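The bandwidth arithmetic can be sketched like this (effective frequency here means millions of transfers per second; dividing the bus width by 8 converts bits to bytes):

```python
def mem_bandwidth_mb_s(effective_mt_s, bus_width_bits=64):
    """Peak per-channel bandwidth: transfer rate x bus width, in MB/s."""
    return effective_mt_s * bus_width_bits // 8

# DDR2-800 on a 64-bit bus: 800 x 64 / 8 = 6400 MB/s, hence PC2-6400
print(mem_bandwidth_mb_s(800))    # 6400
print(mem_bandwidth_mb_s(1600))   # 12800, i.e. double
```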
    The following graph shows the performance gain when memory bandwidth grows from 6400 MB/s to 12800 MB/s, i.e. doubles.
    image

    It is downright underwhelming: with memory bandwidth DOUBLED, the performance gain in most cases does not even exceed 5–6%. One conclusion suggests itself: the bandwidth of modern memory is sufficient.
    The same, in fact, puts the final full stop on the question of system bus bandwidth.

    Number of channels

    Multichannel memory modes are, again, designed to increase memory bandwidth.
    Enabling dual-channel mode yields at best a 2–3% performance gain, and the much-praised triple-channel controller of Intel Core i7 processors on the S1366 platform in the overwhelming majority of cases cannot even pull away from single-channel. The reason is the increased number of clock cycles spent synchronizing such a scheme.

    Timing Scheme

    Memory timings are the signal delays (expressed in clock cycles) during memory access operations. There are quite a lot of timings, but the main scheme includes the 4 most important: CAS Latency, RAS to CAS Delay, RAS Precharge Time, Row Active Time. The smaller the delays, the higher the memory access speed; however, as frequency increases, they have to be raised to maintain stability. So in practice you have to decide which matters more: memory bandwidth or access latency.
    The impact of timings on performance is also very small. On systems with the memory controller integrated into the processor, you can gain 2–5% by lowering timings one step. On systems where the memory controller sits in the chipset, you would have to lower them two steps to achieve the same result.
    As for the sensitivity of individual applications to timings, the breakdown is roughly the same as with memory bandwidth.
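The bandwidth-versus-latency trade-off becomes concrete if you convert timings to absolute time: latency in nanoseconds is the timing in cycles divided by the real clock, which for DDR-style memory is half the effective rate. A sketch (the example module specs are typical retail ones, not taken from the article's tests):

```python
def cas_latency_ns(cl_cycles, effective_mt_s):
    """Absolute CAS latency in ns. Timings count real clock cycles;
    for DDR memory the real clock is half the effective rate."""
    real_clock_mhz = effective_mt_s / 2
    return cl_cycles / real_clock_mhz * 1000  # cycles/MHz = us -> ns

# DDR2-800 CL5 vs DDR3-1600 CL9: double the bandwidth, yet the
# absolute access latency barely improves.
print(cas_latency_ns(5, 800))    # 12.5
print(cas_latency_ns(9, 1600))   # 11.25
```

This is why chasing higher-clocked memory with proportionally looser timings often buys almost nothing.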

    Volume

    One of the sore questions, to which it is difficult in principle to give a definite answer.
    In general, 2 GB of memory is enough for most applications and games. Some games respond very well to increasing it to 3–4 GB. Further increases are pointless for games and other everyday tasks.
    When can the memory size really justifiably exceed the usual 3–4 GB?
    1. Professional work with graphics. Storing the history of edits for heavy files can seriously eat RAM, but in practice this will only be reflected in a slightly faster operation of the same undo / redo functions.
    2. Several virtual machines. We omit the question of the need to keep several simultaneously running VMs on the same computer, but if there is such a need, memory will naturally be needed for this too.
    3. Web servers, terminal servers, etc. In a word - the server. In general, serious servers with a large number of simultaneously running applications may require 16 and 32GB, and even more, but this question is beyond the scope of this article =)
    That's all.
    Some may cite improved multitasking as an argument for a large amount of memory (the same argument, by the way, comes from ardent supporters of multi-core configurations). I advise you to ask yourself: how often will you simultaneously play, say, Crysis, rip a fresh Blu-ray, and archive something on the side? It seems to me: very, very rarely. And anyway, if you do attempt something like that, performance will most likely be limited by the disk subsystem, and no gigabytes of memory will save you.
    One use for large amounts of memory is disabling the swap file or moving it to a RAM disk. But that, too, is a double-edged sword.
    By the way, to work with more than 4 GB of memory (32-bit Windows is in practice limited to about 3.25 GB anyway) you will need a 64-bit operating system (yes, there is a crutch in the form of PAE, but it is still a crutch...). And for a single application to address more than 4 GB, it must itself be 64-bit, too.
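The 4 GB ceiling is simply the size of a 32-bit address space; a quick sanity check:

```python
# A 32-bit pointer distinguishes 2**32 addresses, one byte each:
addressable = 2 ** 32            # 4_294_967_296 bytes
print(addressable // 2 ** 30)    # 4 (GiB)
# Part of that range is reserved for device mappings (video memory,
# PCI, etc.), which is why 32-bit Windows typically exposes only
# ~3.25 GB of it as usable RAM.
```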
    One more question remains: the module configuration. The target volume can be assembled in several ways: 1x2GB, 2x1GB, 2x2GB, 3x1GB, 1GB+2GB... In principle, any option is acceptable. Modern memory controllers are fairly unpicky; it is enough for all modules to share a common operating mode in their SPD. In general, though, the fewer modules, the better: both the load on the controller and the system’s power consumption go down.

    Video card


    With the video card, the situation is completely different from the central processor.
    For starters, it would be good to figure out what this device is actually for. Practice shows that even among supposedly savvy users there are people who do not quite understand this.
    What is a video card for?
    • The video card as a graphic adapter is used to display the image on the screen;
    • A graphics card as a 3D accelerator is used to process three-dimensional graphics.
    What a video card is NOT needed for:
    • The video card is NOT used as an accelerator in the process of processing 2D graphics;
    • The video card is NOT used as an accelerator during video playback.
    This is the general case, so more knowledgeable readers who have made it this far: please don’t rush off to the comments with an angry “but...”; have patience and read on (specifically, the section on additional features).
    Before talking about video cards, one point needs clarifying. Central processors are manufactured at the manufacturer’s own factory, marked there, and then shipped to suppliers. With video cards everything is different.
    The video card manufacturer does, of course, develop the reference PCB design, define the reference characteristics, and so on, but its factory ships only the graphics processor (sometimes some auxiliary chips as well, but that doesn’t change the essence). Assembling video cards from these is the business of sub-vendors. And here they have full freedom of action. The first wave of video cards follows the reference design (the sub-vendor simply hasn’t had time to develop its own PCB), but after that comes a scattering of exclusive solutions. And they start experimenting with frequencies right away...
    Therefore, when comparing the performance of specific products, it is important to know how close their characteristics are to the reference ones.

    Video card as an output device


    If we consider the video card from this point of view, then the characteristics we are interested in are exclusively “consumer”:
    • Internal interface;
    • External interfaces;
    • Supported image output modes;
    • Supported OS
    • TDP

    Internal interface

    Talking about the current situation, the choice here is small: either a discrete board with a PCI Express interface, or a solution integrated into the chipset (Intel went as far as embedding video cores into the central processor in the Core i3 and Pentium G, but the essence hasn’t changed much: that core can only be used with certain chipsets, Intel H55/H57, which in general, IMHO, killed a good idea), also using PCI Express. Video cards with the AGP interface have finally died out, and the occasional devices for the good old PCI fall into the “for lovers of the exotic” category. As for PCI Express 2.0 support, it matters if you plan to build a system with several high-performance graphics cards; motherboards in that case, as a rule,

    External interfaces

    Here the choice depends solely on what you are going to output the image to. DVI and VGA (often via a DVI-VGA adapter) are supported by all video cards without exception. Many support HDMI (often, again, via a DVI-HDMI adapter). DisplayPort is rare, and there is nothing special to be seen in the technology yet =)
    By the way, don’t be upset if your chosen device lacks HDMI support entirely. Most accounts of trying to use this interface for its intended purpose are full of anger and swearing. Personal experience gave the same picture: a wonderful image through the “outdated” VGA and terribly lousy through the “new high-tech” HDMI. And on top of that: outrageous prices for the corresponding cables.

    Supported Image Output Modes

    Here the multi-monitor capabilities of the video card matter most. Even with 3–4 external interfaces, the device can most likely output only 2 images at once, and HDMI is often wired in parallel with DVI, which prevents them from being used together. So if you want to connect 3 or more monitors to one computer, you will either have to look for a device with this capability (a good manufacturer will not forget to mention it among the key features) or stock up on a second video card.
    As for the output modes themselves, even budget solutions support a wide range of them, up to 2560x1600. Though nothing stops you from playing it safe and double-checking.

    Supported OS

    Everything depends on the availability and quality of drivers for the OS you need. This question was once especially painful for adherents of penguin-flavored OSes, but now things seem to have sorted themselves out. As for Windows, the quality of AMD’s drivers is a frequent subject of flame wars. It is hard to say anything objective, but one stone can still be thrown into the “red” garden: AMD releases driver updates strictly once a month, and no suddenly discovered bugs, UFO landings, or other factors can affect that schedule; NVIDIA does not suffer from such insanity: updates come out as needed, and there are always public beta versions besides. Draw your own conclusions.

    Graphics card as a 3D accelerator


    The video card’s capabilities as a 3D accelerator determine the overall system performance in three-dimensional applications, above all in games. It is the video card, not the processor and memory, that you should focus on when building a gaming machine.
    In terms of performance, the following characteristics are important to us:
    • GPU and its revision;
    • The number of texture units and raster operation units (ROPs);
    • Number and type of shader processors;
    • GPU frequencies (can be set separately for raster and shader domains);
    • Video memory characteristics (memory bus width, memory type, real and effective frequency, volume);
    • Support level for DirectX, OpenGL and Shader Model.

    GPU

    It would be logical to assume that here, as with the CPU, each characteristic will be analyzed with graphs, its influence assessed, and so on. But that trick won’t work with video cards: the architectures differ too much, and so do the characteristics of the final solutions.
    Knowing a video card’s characteristics matters first of all for comparison against the reference solution, and then for comparison against solutions in the same line.
    And here one unpleasant issue intrudes: the practice of renaming solutions.
    For example, the following video cards:
    • GeForce 8800GTS / 512MB
    • GeForce 9800GTX
    • GeForce 9800GTX +
    • GeForce 250GTS
    By and large they differ only in PCB design and GPU frequencies (and, strictly speaking, GPU revision, but that is secondary), i.e. they are in fact one and the same card. Yet at first glance these are three completely different series. Similarly, GF8600GT = GF9500GT, GF8800GT = GF9800GT, etc. AMD does not rename outright, but something similar has happened with them in the past (HD2600 and HD3650, for example). All this creates quite a tangible mess, and only the GPU name helps sort it out. Having found all the solutions based on a given GPU, one can well discover “clones” with the same (or nearly the same) characteristics but an older name.
    The revision of the GPU and its manufacturing process, as in the case of the CPU, mainly affect the TDP and the frequency potential.

    Number of specialized blocks

    Generally speaking, these parameters are directly related to performance, but, as mentioned above, they cannot be compared head-to-head across architectures.
    The shader processors of modern AMD GPUs, by the way, are superscalar and execute 5 instructions per clock, and the cunning manufacturer claims to have 5 times as many processors =)

    Video Memory Features

    I don’t know how it came about, but people are drawn to judging video card performance by the characteristics of the video memory.
    Any budget junk like the GeForce 8400 or Radeon HD4300 sells like hotcakes, because it is marketed under the slogan “cool gaming video card with a whole gig.” It’s funny. And sad. And recently another trend has emerged: more and more “specialists” appear who, having skimmed a couple of articles diagonally (and these are IT people, or at least close to IT, which is the shame of it), conclude that the main parameter is the width of the video memory bus. They even try to prove it with practical examples. But if they had read those articles properly instead of diagonally, they would know that the bus width by itself is not of great importance; rather, it determines a genuinely significant characteristic, memory bandwidth (and here, unlike the situation with system RAM, more really is better, much better). And then
    Returning to capacity: most mid-range solutions carry either 512 MB or 1 GB of video memory. Tests show the extra gigabyte can be useful only on high-end graphics cards running at high resolutions (1920x1080, for example), and not always even then. Anything larger is, accordingly, pure marketing bait.
    Dual-chip video cards, by the way, do not have the capacity printed on the box, but “half, twice over.” That is, the Radeon HD4870X2 2GB, for example, has not 2 GB of memory but 1 GB per processor. What’s the difference? A big one: each GPU can only use its own memory. So there =)

    API Support Level

    The level of support for the various APIs and Shader Model determines nothing beyond the video card’s compatibility with them. In general, don’t chase them. By the time a sufficient number of games acquire proper support for a new API (not just a pair of effects), let alone stop stuttering with it, more than one generation of video cards will have come and gone.

    About gaming performance


    While we are at it, it is worth mentioning how gaming performance is measured.
    Visit any hardware forum and you will find crowds of schoolkids shouting that “Crysis flies at max settings with no lag” on their previous-generation budget hardware, alongside hardware enthusiasts vainly trying to squeeze the same “no lag” out of $500 top-end cards.
    The reason lies in the measurement technique. For the schoolkid masses it is subjective feel; IT folk have, in addition to a subjective opinion, an objective measure: FPS.
    FPS (Frames Per Second) - the number of frames that the game engine manages to render in one second.
    The “minimum playable bar” is considered 25–30 FPS.
    The “comfort level” is twice that, i.e. 50–60 FPS.
    I won’t go into where these figures come from. They are generally accepted values, and they became so for a reason. Look it up yourself if you wish.
    As with any dynamic value, we can consider the maximum, minimum, and average FPS.
    Peak performance is of little interest, since it is very short-lived and does not affect the overall picture, but the average and the minimum interest us very much.
    Some game engines, however, suffer from a bad habit: lagging (stuttering) at a seemingly decent FPS. Fortunately, this effect is not common.
    So: only if the average FPS sits at 50–60 or above, never falls below 25–30 at peak load, and there are no lags/stutters/freezes can we objectively speak of “no brakes” at the current resolution and quality settings.
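That criterion boils down to two thresholds, which can be sketched as a simple check (50 and 25 are the lower bounds of the ranges stated above; the function name is my own):

```python
def feels_smooth(avg_fps, min_fps, comfort_avg=50, playable_floor=25):
    """True only if the average clears the comfort level AND the
    worst dip never falls below the playability floor."""
    return avg_fps >= comfort_avg and min_fps >= playable_floor

print(feels_smooth(60, 30))  # True: comfortable, never dips too low
print(feels_smooth(60, 12))  # False: high average, slideshow moments
```

Note that a high average alone passes the schoolkid test but fails this one, which is exactly the point of tracking the minimum.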
    As for quality settings, this too is a double-edged sword. Personally, I do not consider it acceptable to call settings “maximum” if, for example, full-screen anti-aliasing (FSAA) is disabled. What can I do, I love a clean, beautiful picture and I dislike jaggies. By the way, there are games where FSAA is absent, or present but has no effect; these, as a rule, are the same ones that stutter at any FPS =)

    Additional features of modern graphics cards


    DXVA

    DirectX Video Acceleration is a hardware video acceleration technology. It allows the video decoding process to be shifted almost entirely from the central processor to the GPU, unloading the former considerably. It requires support from the GPU (GeForce 8 and newer, except those based on the G80 GPU, and all Radeon HD2000 and newer) and from software (the name hints that it won’t work without an OS from Microsoft, and a deeper study of the question shows it really wants Vista/7 with their Enhanced Video Renderer). Not all codecs are supported, and it is generally capricious: it won’t accept just any file, only “correctly encoded” ones (which are, of course, far from universal).

    GPGPU

    General-Purpose computing on Graphics Processing Units: technology for using the GPU for general-purpose computing. Roughly speaking, making the GPU do the central processor’s job.
    Currently there are 4 implementations of this approach: CUDA (NVIDIA GeForce 8 and later), Stream (AMD Radeon HD4000 and later), Compute Shader (one of the features of MS DirectX 11), and OpenCL (an open standard designed to bring order to this area).
    We will consider CUDA as the most widespread at the moment.
    A list of existing solutions is available at http://www.nvidia.com/object/cuda_home.html
    An impressive part of this list is made up of programs for economic and scientific calculations. Among them, there are clients of distributed computing projects Folding @ home and SETI @ home, which have a specific audience of fans.
    From the point of view of an ordinary user, software for encoding and decoding video, which is also present in the assortment, is more interesting.
    The CoreAVC decoder, for example, performs practically the same function as DXVA but is not at all picky about file parameters: it is enough that the file is encoded with the right codec (H264/AVC).
    Among encoding tools the most notable are Badaboom, MediaCoder, TMPGEnc, Movavi Video Suite, and two Cyberlink products. True, of the products I personally managed to test (Badaboom and MediaCoder), neither supported multi-pass encoding, which is rather critical. But I didn’t try everything, and time does not stand still =)
    The performance gain in the above applications with CUDA enabled ranges from one and a half to TEN (!!!) times. The CPU+GPU combination naturally plays a large role, especially the latter, but even mid-range video cards show very interesting results.

    Teh end?


    But what about the motherboard, hard drives, power supply, and so on?
    Have patience. Otherwise the articles balloon to exorbitant size. People complained the first part was long, and this one turned out twice as long. Everything will come in time. Hard drives, and data storage in general, will specifically be the subject of part 4.
    The next, third part will continue the question begun here, the performance of processors, RAM, and video cards, but from the other side: tapping hidden potential ;) And since the current topic, as it turns out, is not fully covered, there will be no general conclusion.
