Intel - a success story

    Understanding Intel and its three founders is possible only if you understand Silicon Valley and its origins. And for that, you need to dig into the history of Shockley Semiconductor Laboratory, the Traitorous Eight, and Fairchild Semiconductor. Without them, Intel will remain for you what it is for most people: a mystery.

    The invention of the computer did not mean the revolution began immediately. The first computers, built on large, expensive, failure-prone vacuum tubes, were costly monsters that only corporations, research universities, and the military could afford. The arrival of the transistor, and then of technologies that made it possible to etch millions of transistors onto a tiny microchip, meant that the computing power of many thousands of ENIACs could be packed into the nose cone of a rocket, into a computer that fits on your lap, and into portable devices.

    In 1947, Bell Laboratories engineers John Bardeen and Walter Brattain invented the transistor, which was introduced to the general public in 1948. A few months later, William Shockley, also at Bell, developed the bipolar junction transistor. The transistor, in essence a solid-state electronic switch, replaced the bulky vacuum tube. The transition from vacuum tubes to transistors started a miniaturization trend that continues to this day. The transistor became one of the most important inventions of the 20th century.

    In 1956, William Shockley, a Nobel laureate in physics, founded Shockley Semiconductor Laboratory to work on four-layer diodes. Shockley failed to attract his former colleagues from Bell Labs; instead, he hired what he considered the best young electronics specialists fresh out of American universities. In September 1957, after a conflict with Shockley, who had decided to halt research on silicon semiconductors, eight key employees of Shockley Semiconductor decided to quit and strike out on their own. These eight are now forever known as the Traitorous Eight, the epithet Shockley gave them when they left. They were Robert Noyce, Gordon Moore, Jay Last, Jean Hoerni, Victor Grinich, Eugene Kleiner, Sheldon Roberts, and Julius Blank.

    image

    After leaving, they decided to start their own company, but there was nowhere to get the money. After calling around 30 companies, they found Sherman Fairchild, the owner of Fairchild Camera and Instrument. He gladly invested one and a half million dollars in the new company, almost twice as much as the eight founders had originally thought they needed. A buyout option was part of the deal: if the company proved successful, Fairchild Camera and Instrument could purchase it outright for three million dollars, a right it exercised as early as 1959. The subsidiary was named Fairchild Semiconductor.

    In January 1959, Robert Noyce, one of the eight founders of Fairchild, invented the silicon integrated circuit. Jack Kilby at Texas Instruments had invented a germanium integrated circuit six months earlier, in the summer of 1958, but Noyce's design proved more suitable for mass production, and it is the one used in modern chips. In 1959, Kilby and Noyce independently filed patent applications for the integrated circuit; both were granted, and Noyce received his patent first.

    In the 1960s, Fairchild became one of the leading manufacturers of operational amplifiers and other analog integrated circuits. At the same time, however, the new management of Fairchild Camera and Instrument began to restrict Fairchild Semiconductor's freedom of action, which led to conflicts. One by one, members of the original eight and other experienced employees began to quit and found their own companies in Silicon Valley.

    image

    Intel was founded on July 18, 1968 by Robert Noyce, Gordon Moore and Andrew Grove.

    image

    The first name Noyce and Moore chose was NM Electronics, N and M being the first letters of their last names. It was not very impressive. After a long string of not very successful proposals, such as Electronic Solid State Computer Technology Corporation, they came to the final decision: the company would be called Integrated Electronics Corporation. In itself that was not too impressive either, but it had one virtue: abbreviated, the company could be called Intel. It sounded good. The name was energetic and eloquent.

    image

    The founders set themselves a definite goal: to create practical and affordable semiconductor memory. Nothing like it had ever been built, given that memory on silicon chips cost at least a hundred times more than the magnetic-core memory standard at the time. Semiconductor memory cost about a dollar per bit, while magnetic-core memory cost only about a cent per bit. Robert Noyce said: "We only needed to do one thing: reduce the cost a hundredfold and thereby conquer the market. And that is essentially what we did."

    In 1970, Intel released a 1 Kbit memory chip, far exceeding the capacity of existing chips (1 Kbit is 1024 bits, and a byte is 8 bits, so the chip could store only 128 bytes of information, negligible by modern standards). The chip, the 1103 dynamic random access memory (DRAM), had by the end of the following year become the best-selling semiconductor device in the world. By that time, Intel had grown from a handful of enthusiasts into a company with more than a hundred employees.
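    As a quick sanity check of those figures, here is a minimal back-of-the-envelope sketch in Python; the constants are simply the numbers quoted above, not data from any datasheet:

        KILOBIT = 1024                          # 1 Kbit = 1024 bits
        BITS_PER_BYTE = 8

        capacity_bytes = KILOBIT // BITS_PER_BYTE
        print(capacity_bytes)                   # 128 bytes, as stated in the text

        # Approximate cost-per-bit figures quoted above (assumed, not measured):
        semiconductor_cost_per_bit = 1.00       # about one dollar per bit for early silicon memory
        core_cost_per_bit = 0.01                # about one cent per bit for magnetic cores
        print(semiconductor_cost_per_bit / core_cost_per_bit)   # the roughly 100x gap Noyce set out to close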

    Around this time, the Japanese company Busicom approached Intel with a request to develop a chipset for a family of high-performance programmable calculators. The calculator's initial design called for at least 12 different types of chips. Intel engineer Ted Hoff rejected this concept and instead proposed a single-chip logic device that fetched its instructions from semiconductor memory. This central processor ran under program control, which made it possible to adapt the chip's functions to the task at hand. Unlike earlier logic modules, each of which served a single purpose with a strictly defined set of commands, the chip was universal: its use was not limited to a calculator.

    There was one problem with this chip: all rights to it belonged exclusively to Busicom. Ted Hoff and the other developers understood that the design had almost unlimited applications, and they insisted that Intel buy back the rights to it. Intel offered to refund the 60 thousand dollars Busicom had paid for the license in exchange for the rights to the chip. Busicom, which was in a difficult financial position, agreed.

    On November 15, 1971, the first 4-bit microcomputer chipset, the 4004, appeared (the term microprocessor came into use much later). The chip contained 2,300 transistors, cost 200 dollars, and was comparable in its capabilities to ENIAC, the first computer, built in 1946 from 18 thousand vacuum tubes and occupying 85 cubic meters.

    image

    The microprocessor performed 60 thousand operations per second, ran at a frequency of 108 kHz, and was produced on a 10-micron process (10,000 nanometers). Data was transferred in 4-bit blocks per clock, and the maximum addressable memory was 640 bytes. The 4004 was used to control traffic lights, in blood analyzers, and even aboard NASA's Pioneer 10 probe.

    In April 1972, Intel released the 8008 processor, which ran at 200 kHz.

    image

    It contained 3,500 transistors and was produced on the same 10-micron process. The data bus was 8 bits wide, and the processor could address 16 KB of memory. It was intended for use in terminals and programmable calculators.

    The next processor model, the 8080, was announced in April 1974.

    image

    This processor already contained 6,000 transistors and could address 64 KB of memory. The first Altair 8800 personal computer (not a PC in the IBM sense) was built around it. The computer ran the CP/M operating system, and Microsoft developed an interpreter for the BASIC programming language for it. It was the first mass-produced computer for which thousands of programs were written.

    Over time, the 8080 became so famous that it began to be copied.

    At the end of 1975, several former Intel engineers who had worked on the 8080 founded Zilog. In July 1976, the company released the Z-80 processor, a significantly improved version of the 8080.

    image

    The Z-80 was not pin-compatible with the 8080, but it combined many functions, such as a memory interface and dynamic RAM refresh circuitry, that made it possible to build cheaper and simpler computers. It implemented a superset of the 8080 instruction set, adding new instructions and internal registers, so software written for the 8080 could run on practically any Z-80 system.

    Initially, the Z-80 ran at a frequency of 2.5 MHz (later versions ran at 10 MHz), contained 8,500 transistors, and could address 64 KB of memory.

    Radio Shack chose the Z-80 for its TRS-80 Model 1 personal computer. The Z-80 soon became the standard processor for systems running the CP/M operating system and the most common software of the time.

    Intel did not stop there, and in March 1976, it released the 8085 processor, which contained 6500 transistors, ran at 5 MHz, and was manufactured using 3-micron technology (3000 nanometers).

    image

    Even though it was released a few months earlier than the Z-80, it never achieved the latter's popularity. It was used mainly as a control chip in various computerized devices.

    Around the same time, MOS Technology released the 6502 processor, which was completely different from Intel's processors.

    image

    It was developed by a group of engineers who had come from Motorola. The same group had worked on the 6800 processor, which later evolved into the 68000 processor family. The first versions of the 8080 cost as much as three hundred dollars, while the 8-bit 6502 cost only about twenty-five. That price suited Steve Wozniak, and he built the 6502 into the new Apple I and Apple II models. The 6502 was also used in systems from Commodore and other manufacturers.

    This processor and its successors also found a home in game systems, including the Nintendo Entertainment System console. Motorola continued work on its 68000 series of processors, which were later used in Apple Macintosh computers. The second generation of Macs used the PowerPC processor, the successor to the 68000 in that line. Today, Mac computers have switched to the PC architecture and use the same processors, system logic chips, and other components.

    In June 1978, Intel introduced the 8086 processor, whose instruction set came to be known as x86.

    image

    That same instruction set is still supported in every modern microprocessor, from the AMD Ryzen Threadripper 1950X to the Intel Core i9-7920X. The 8086 was fully 16-bit, in both its internal registers and its data bus. It contained 29,000 transistors and ran at 5 MHz. Thanks to a 20-bit address bus, it could address 1 MB of memory. The 8086 was not designed to be backward compatible with the 8080, but the strong similarity of their instruction sets and assembly language allowed earlier software to be ported. This property later played an important role in the rapid migration of CP/M (8080) system software to the PC.
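    The relationship between address-bus width and addressable memory mentioned here is simple powers-of-two arithmetic; a minimal illustration (the variable names exist only for this example):

        ADDRESS_BUS_BITS = 20
        addressable_bytes = 2 ** ADDRESS_BUS_BITS
        print(addressable_bytes)               # 1,048,576 bytes, i.e. 1 MB

        # For comparison, the 16-bit address space of the 8080/8085 era:
        print(2 ** 16)                         # 65,536 bytes, i.e. 64 KB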

    Despite the 8086's high performance, its price was still too high by the standards of the time and, more importantly, it required expensive 16-bit support chips. To reduce the cost, in 1979 Intel released the 8088, a simplified version of the 8086.

    The 8088 used the same internal core and 16-bit registers as the 8086 and could address 1 MB of memory, but, unlike its predecessor, it used an external 8-bit data bus. This allowed it to work with the cheaper 8-bit support chips developed for the earlier 8085 and thereby significantly reduced the cost of the motherboards and computers built around it. That is why IBM chose the cut-down 8088 rather than the 8086 for its first PC. This decision had far-reaching consequences for the entire computer industry.

    The 8088 was fully software-compatible with the 8086, so it could run 16-bit software. The 8085 and 8080 used a very similar instruction set, so programs written for those earlier processors could easily be converted for the 8088. This, in turn, made it possible to develop a wide range of programs for the IBM PC, which was key to its future success. Having started down this path, Intel was then effectively obliged to maintain backward compatibility with the 8086/8088 in most of the processors it released afterward.

    Intel began developing a new microprocessor immediately after the release of the 8086/8088. Those processors required a large number of support chips, so the company decided to develop a microprocessor that integrated all the necessary modules on a single chip. The new processor incorporated many components previously sold as separate chips, which would dramatically reduce the number of chips in a computer and, consequently, its cost. The instruction set was also extended.

    In the second half of 1982, Intel launched the 80186 embedded processor, which, in addition to an improved 8086 core, contained additional modules replacing some of the support chips.

    image

    Also in 1982, the 80188 was released, a version of the 80186 with an 8-bit external data bus.

    Released on February 1, 1982, the 16-bit x86-compatible 80286 microprocessor was an improved version of the 8086 and delivered 3 to 6 times the performance.

    image

    This qualitatively new microprocessor was then used in the landmark IBM PC/AT computer.

    The 286 was developed in parallel with the 80186/80188, but it lacked some of the modules found on the 80186. The 80286 was manufactured in the same LCC package as the 80186, as well as in 68-pin PGA packages.

    In those years, backward compatibility between processors was still maintained, which did not prevent various innovations and additional features from being introduced. One of the main changes was the transition from the 16-bit internal architecture of the 286 and earlier processors to the 32-bit internal architecture of the 386 and the subsequent processors that make up the IA-32 family. This architecture was introduced in 1985, but it took another 10 years for operating systems such as Windows 95 (partly 32-bit) and Windows NT (requiring only 32-bit drivers) to reach the market. Only in 2001 did Windows XP appear, 32-bit both at the driver level and in all of its components. So it took 16 years for 32-bit computing to be fully adopted.

    The 80386 appeared in 1985. It contained 275 thousand transistors and performed more than 5 million operations per second.

    image

    Compaq's DESKPRO 386 computer was the first PC based on a new microprocessor.

    The next member of the x86 family was the 486, which appeared in 1989.

    image

    It already contained 1.2 million transistors and the first built-in math coprocessor, and it ran 50 times faster than the 4004; its performance was on par with powerful mainframes.

    Meanwhile, the US Department of Defense did not relish the prospect of depending on a single chip supplier. As the number of suppliers kept shrinking (remember what a zoo of manufacturers there was in the early nineties), the importance of AMD as an alternative manufacturer grew. Under a 1982 agreement, AMD held licenses to produce the 8086, 80186, and 80286; however, Intel categorically refused to hand over the newly developed 80386 to AMD, and the agreement fell apart. A long and loud lawsuit followed, the first in the companies' history. It ended only in 1991 with a victory for AMD; for its stance, Intel paid the plaintiff a billion dollars.

    Still, the relationship had been spoiled, and there was no question of returning to the former trust. What is more, AMD took the path of reverse engineering: it went on to produce Am386 and then Am486 processors that differed in hardware but matched Intel's microcode exactly. This time it was Intel that went to court. Again the case dragged on, with the advantage swinging from one side to the other. But on December 30, 1994, the court ruled that Intel's microcode belonged to Intel, and that other companies could not simply use it against the owner's wishes. So from 1995 on, things changed seriously: the Intel Pentium and the AMD K5 both ran any application for the x86 platform, but architecturally they were fundamentally different.

    However, to ensure compatibility, technological cross-pollination never went away. Modern Intel processors contain quite a few features patented by AMD, and, conversely, AMD carefully adopts instruction sets developed by Intel.

    In 1993, Intel introduced the first Pentium processor, whose performance was five times that of the 486 family. It contained 3.1 million transistors and performed up to 90 million operations per second, roughly 1,500 times the speed of the 4004.

    image

    When the next generation of processors appeared, those who had been counting on the name Sexium were disappointed.

    The P6 processor family, called the Pentium Pro, was born in 1995.

    image

    It contained 5.5 million transistors and was the first processor whose level-2 cache was placed directly in the processor package, which significantly increased its speed. The processor had 16 KB of L1 cache and 256 KB of L2. The large amount of cache memory was partly offset by the lack of MMX instructions.

    Revising the P6 architecture, Intel introduced the Pentium II processor in May 1997.

    image

    It contained 7.5 million transistors and, unlike traditional processors, was packaged in a cartridge, which made it possible to place the L2 cache directly in the processor module and significantly improve its speed. In April 1998, the Pentium II family was expanded with the low-cost Celeron processor for home PCs and the professional Pentium II Xeon processor for servers and workstations. Also in 1998, Intel for the first time integrated level-2 cache memory, running at the full processor core frequency, directly into the chip, which significantly increased its speed.

    While the Pentium was rapidly gaining market dominance, AMD acquired NexGen, which had been working on the Nx686 processor. The merger produced the AMD K6 processor.

    image

    This processor was compatible with the Pentium both in hardware and in software: it fit into Socket 7 and ran the same programs. AMD continued to develop faster versions of the K6 and captured much of the mid-range PC market.

    The first high-end desktop processor with built-in level-2 cache running at the full core frequency was the Pentium III based on the Coppermine core, introduced in late 1999; it was, in essence, a Pentium II with SSE instructions added.

    In 1999, AMD introduced the Athlon processor, which allowed it to compete with Intel on almost equal terms in the market for high-performance desktop PCs.
    image
    This processor was very successful, and Intel found itself facing a worthy opponent in the field of high-performance systems. Today the success of the Athlon is beyond doubt, but there were concerns when it entered the market. Unlike its predecessor, the K6, which was compatible with Intel's processors at both the software and hardware level, the Athlon was compatible only at the software level: it required its own chipset and a special socket.

    The new AMD processors were built on a 250-nm process with 22 million transistors. They had a new integer unit (ALU). The EV6 system bus transferred data on both clock edges, giving an effective frequency of 200 MHz at a physical frequency of 100 MHz. The level-1 cache was 128 KB (64 KB for instructions and 64 KB for data), and the level-2 cache reached 512 KB.
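    The "both clock edges" trick is easy to express as arithmetic; here is a minimal sketch using only the figures from the paragraph above:

        physical_clock_mhz = 100
        transfers_per_cycle = 2                # EV6 moves data on both the rising and falling edge
        effective_clock_mhz = physical_clock_mhz * transfers_per_cycle
        print(effective_clock_mhz)             # 200 MHz effective, as stated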

    The year 2000 brought new products from both companies. On March 6, 2000, AMD released the world's first processor with a clock frequency of 1 GHz, a member of the increasingly popular Athlon family based on the Orion core. AMD also introduced the Athlon Thunderbird and the Duron for the first time. The Duron was essentially identical to the Athlon and differed only in its smaller L2 cache; the Thunderbird, in turn, used on-die cache memory, which increased its speed. The Duron was a cheaper version of the Athlon, designed primarily to compete with the low-cost Celeron. And at the end of the year, Intel introduced the new Pentium 4.

    In 2001, Intel released a new version of the Pentium 4 processor with an operating frequency of 2 GHz, which became the first processor to achieve this frequency. In addition, AMD introduced the Athlon XP processor, based on the Palomino core, as well as the Athlon MP, designed specifically for multiprocessor server systems. During 2001, AMD and Intel continued to work on improving the performance of the chips being developed and improving the parameters of existing processors.

    In 2002, Intel introduced the Pentium 4 processor that first reached a working frequency of 3.06 GHz; it and subsequent processors supported Hyper-Threading technology. Simultaneous execution of two threads gives Hyper-Threading processors a performance gain of 25-40% over conventional Pentium 4 processors. This inspired programmers to develop multi-threaded programs and set the stage for the arrival of multi-core processors in the near future.
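    To give a sense of what writing a "multi-threaded program" means in practice, here is a minimal, hypothetical Python sketch that splits one task across two worker threads, mirroring the two hardware threads a Hyper-Threading core presents to the operating system. The function and workload are invented purely for illustration; note that in CPython, true CPU-bound parallelism would actually require processes rather than threads.

        import concurrent.futures

        def count_even(chunk):
            # Hypothetical workload: count the even values in a slice of data.
            return sum(1 for n in chunk if n % 2 == 0)

        data = list(range(1_000_000))
        half = len(data) // 2

        # Two workers, one per logical (hyper-threaded) core.
        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
            results = pool.map(count_even, [data[:half], data[half:]])

        print(sum(results))                    # same answer as a single-threaded loop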

    In 2003, AMD launched the first 64-bit Athlon 64 processor (codenamed ClawHammer, or K8).

    image

    Unlike the 64-bit Itanium and Itanium 2 server processors, which were optimized for a new 64-bit software architecture and rather slow with traditional 32-bit programs, the Athlon 64 implemented a 64-bit extension of x86. Some time later, Intel introduced its own set of 64-bit extensions, called EM64T or IA-32e. Intel's extensions were almost identical to AMD's, which meant the two were software compatible. To this day some operating systems refer to the extensions as AMD64, although the competitor prefers its own branding in marketing documents.

    In the same year, Intel released the first processor with level-3 cache memory, the Pentium 4 Extreme Edition. It had 2 MB of built-in L3 cache, a significantly higher transistor count, and, as a result, higher performance. The Pentium M chip for laptop computers also appeared. It was conceived as an integral part of the new Centrino platform, meant, first, to reduce power consumption and thereby extend battery life and, second, to make more compact and lightweight cases possible.

    For 64-bit computing to become a reality, 64-bit operating systems and drivers are needed. In April 2005, Microsoft began distributing Windows XP Professional x64 Edition, which supported the additional AMD64 and EM64T instructions.

    Keeping up the pace, in 2005 AMD launched its first dual-core x86 processors, the Athlon 64 X2.

    image

    At that time, very few applications were able to use two cores at the same time, but in specialized software, the performance gain was quite impressive.

    In November 2004, Intel was forced to cancel the release of the Pentium 4 model with a clock frequency of 4 GHz due to problems with the heat sink.

    On May 25, 2005, the Intel Pentium D processors were demonstrated for the first time. There is nothing special to say about them, except that their heat dissipation was about 130 W.

    In 2006, AMD introduced the world's first 4-core server processor in which all four cores were built on a single die rather than "glued together" from two, as its competitor's were. This meant solving very complex engineering problems, both at the development stage and in production.

    In the same year, Intel changed the name of the Pentium brand to Core and released the Core 2 Duo dual-core chip.

    image

    Unlike the NetBurst architecture of the Pentium 4 and Pentium D, the Core 2 architecture emphasized not higher clock frequency but other processor parameters, such as cache, efficiency, and the number of cores. The power dissipation of these processors was significantly lower than that of the desktop Pentium line. With a TDP of 65 W, the Core 2 had the lowest power dissipation of all desktop microprocessors then available, compared with Intel's Prescott cores at a TDP of 130 W and AMD's San Diego cores at a TDP of 89 W.

    The first desktop quad-core processor was the Intel Core 2 Extreme QX6700 with a clock frequency of 2.67 GHz and 8 MB of second-level cache.

    In 2007, the 45-nanometer Penryn microarchitecture came out, using lead-free high-k metal-gate technology. It was used in the Intel Core 2 Duo processor family. Support for SSE4 instructions was added, and the maximum level-2 cache in dual-core processors grew from 4 MB to 6 MB.

    In 2008, the next-generation architecture, Nehalem, was released. Its processors gained an integrated memory controller supporting 2 or 3 channels of DDR3 SDRAM or 4 channels of FB-DIMM. The FSB was replaced by the new QPI bus, and the level-2 cache was reduced to 256 KB per core.

    Intel soon transferred the Nehalem architecture to a new 32-nm process technology. This line of processors is called Westmere.

    The first model of the new microarchitecture was Clarkdale, with two cores and an integrated graphics core manufactured on the 45-nm process.

    AMD tried to keep up with Intel. In 2007, it released a new generation of x86 microprocessor architecture - Phenom (K10).

    image

    Four processor cores were combined on a single die. In addition to the level-1 and level-2 caches, the K10 models finally got an L3 cache, 2 MB in size. The level-1 data and instruction caches were 64 KB each, and the level-2 cache was 512 KB. Forward-looking support for a DDR3 memory controller also appeared; the K10 used two 64-bit memory controllers. Each core had a 128-bit floating-point unit. On top of that, the new processors communicated over the HyperTransport 3.0 interface.

    In 2009, the long-running conflict between Intel and AMD over patent law and antitrust law was finally settled. For almost ten years, Intel had used a number of unfair practices that hampered fair competition in the semiconductor market: it pressured its partners to refuse AMD processors, and it bought customer loyalty with large discounts and exclusive agreements. As a result, Intel paid AMD 1.25 billion dollars and pledged to follow a specific set of business-practice rules for the next five years.

    By 2011, the era of the Athlons and the heat of competition in the processor market had given way to a lull, but it did not last long: in January, Intel introduced its new Sandy Bridge architecture, an ideological successor to the first-generation Core and a milestone that let the blue giant take the lead in the market. AMD fans had to wait quite a while for a response from the reds: only in October did the long-awaited Bulldozer appear, the return of the AMD FX brand, tied to the company's breakthrough processors of the early 2000s.
    image
    AMD's new architecture took on a lot: the confrontation with Intel's best (and later legendary) solutions cost the chipmaker from Sunnyvale dearly. The reds' traditionally inflated marketing, with its loud statements and incredible promises, crossed every boundary: Bulldozer was proclaimed a real revolution, an architecture predicted to be worthy of battling the competitor's new products. So what did FX bring to the fight for the market?

    A bet on multi-threading and uncompromising multi-core: in 2011, AMD FX was proudly called "the most multi-core desktop processor on the market", and this was no exaggeration. The architecture was built around eight cores (albeit partly logical ones), each running a single thread. At the time of the announcement, against the competitor's four cores, this was an innovative and bold decision that looked far ahead. But alas, AMD has always bet on a single direction, and in Bulldozer's case it was not the one the mass consumer was counting on.

    The raw throughput of AMD's chips was very high, and in synthetic benchmarks FX easily posted impressive results. Unfortunately, the same could not be said of games: workloads that used only one or two cores and parallelized poorly meant that Bulldozer creaked along under loads where Sandy Bridge felt no difficulty at all. Add to this the series' two Achilles' heels, a dependence on fast memory and a rudimentary north bridge, plus only one FPU for every two cores, and the result was deplorable. AMD FX came to be called a hot, unwieldy alternative to the fast and powerful blue processors, with only its relative cheapness and compatibility with older motherboards in its favor. At first glance, it was a complete failure.

    The updated Bulldozer was named Piledriver. The architecture gained new instructions, put on some muscle in single-threaded loads, and optimized the handling of its many cores, which increased multi-threaded performance. However, by then the competitor for the refreshed red series was the notorious Ivy Bridge, which only increased the number of Intel fans. AMD decided to stick to its well-worn strategy of attracting budget users: overall savings on components and the chance to get more for less money (without encroaching on the segment above).

    But the most amusing twist in the history of what most consider the least successful architecture in AMD's arsenal is that AMD FX sales can hardly be called a failure, or even mediocre. According to Newegg's 2016 figures, the AMD FX-6300 was the second most popular processor (losing only to the i7 6700K), and the FX-8350, the well-known leader of the red budget segment, made the top five best sellers, just behind the i7 4790K. At the same time, even the relatively cheap i5, held up as an example of marketing success and "popular" status, lagged noticeably behind the time-tested old-timers based on Piledriver.

    Finally, it is worth noting a rather amusing fact that a few years ago would have been dismissed as an excuse from AMD fans: the rivalry between the FX-8350 and the i5 2500K, which began at the Bulldozer launch. For a long time it was believed that the red processor lagged far behind the 2500K favored by many enthusiasts, but in fresh 2017 tests, paired with the most powerful GPUs, the FX-8350 turns out to be faster in almost all game benchmarks. "At last!" would be the appropriate reaction.

    And Intel, meanwhile, continues to conquer the market.

    image

    In 2011, a batch of new processors on the Sandy Bridge architecture was released for the new LGA 1155 socket introduced the same year. This was the second generation of modern Intel Core processors, a complete refresh of the line that paved the way to commercial success for the company, because nothing else matched it in per-core performance and overclocking. Perhaps you remember the i5 2500K, a legendary processor: with a suitable tower cooler it overclocked to almost 5 GHz, and even today, in 2017, it can deliver acceptable performance in modern games in a system with one, and possibly two, video cards. On hwbot.org the Russian overclocker SAV pushed it to 6014.1 MHz. It was a 4-core processor with 6 MB of level-3 cache and a base frequency of only 3.3 GHz, nothing special on paper, but thanks to the soldered heat spreader the processors of this generation overclocked very well and did not overheat. The i7 2600K and 2700K, 4-core processors with Hyper-Threading giving them a full 8 threads, were also unqualified successes in this generation. They overclocked slightly less well but offered higher performance, and therefore higher heat output. They were bought for fast and efficient video-editing systems and for streaming on the Internet. Interestingly, the 2600K, like the i5 2500K, is still used today not only by gamers but also by streamers. It is fair to say that this generation became a national treasure: everyone wanted Intel processors and nothing else, which affected their price, and not in the consumer's favor.

    In 2012, Intel launched the 3rd generation of processors, called Ivy Bridge, which looked strange: only a year had passed, had they really managed to invent something fundamentally new that would give a noticeable performance boost? Not quite. The new generation used the same LGA 1155 socket, and its processors were not much faster than the previous ones, which, of course, was because there was no competition in the top segment. AMD was not exactly breathing down Intel's neck, so Intel, having effectively become a monopolist in the market, could afford to release processors only slightly more powerful than its own previous ones. But another trick crept in: as the thermal interface under the heat spreader, Intel now used not solder but a paste of its own, which people nicknamed "chewing gum". This was done to save money, and it brought in even more profit. The topic blew up the Internet: it was no longer possible to overclock these processors to the limit, because they ran on average 10 degrees hotter than the previous generation and their frequencies topped out around 4 to 4.2 GHz. Dedicated enthusiasts even delidded their processors to replace the thermal paste with something more effective; not everyone managed to do it without chipping the die or damaging the processor's contacts, but the method proved effective. Still, I can highlight some processors of this generation that were a success.

    You may have noticed that I did not mention the i3 when talking about the second generation; that is because processors of that class were not particularly popular. Everyone always wanted an i5, and those with money, of course, took an i7.

    In the 3rd generation, which we will talk about now, the situation did not change dramatically.
    The successes of this generation were the i5 3340 and the i5 3570K. They did not differ much in performance; everything came down to frequency, and the cache was still the same 6 MB. The 3340 could not be overclocked, so the 3570K was more desirable, but both provided good gaming performance. Among the i7s for socket 1155 there was only the 3770 with a K index, with an 8 MB cache and a frequency of 3.5 to 3.9 GHz; in boost it was usually overclocked to 4.2 to 4.5 GHz. Interestingly, back in 2011 the new LGA 2011 socket had been released, for which the super-processors i7 4820K (4 cores, 8 threads, 10 MB of L3 cache) and i7 4930K (6 cores, 12 threads, a full 12 MB of L3 cache) came out. What monsters they were is hard to convey: such a chip cost a thousand dollars and was the dream of many schoolkids at the time, although for games it was of course overkill, better suited to professional tasks.

    In 2013, Haswell came out: yes, another year, another generation, traditionally a little more powerful than the previous one, because AMD had failed yet again. It is known as the hottest generation. Nevertheless, the i5s of this generation did pretty well, because, in my opinion, owners of Sandy Bridge rushed to swap their supposedly obsolete processors for the new "revolution" from Intel that the whole Internet was buzzing about. The processors overclocked even worse than the previous generation, which is why many people still dislike Haswell. Its performance was only slightly higher than its predecessor's (by about 15 percent, which is not much, but a monopoly does its job), and the overclocking ceiling was a convenient way for Intel to give the user less "free" performance.

    All the i5s were, as usual, without Hyper-Threading. They ran at 3 to 3.9 GHz in boost; you could take any model with a K index, as that guaranteed good performance even if the overclock was not spectacular. At first there was only one i7, the 4770K: 4 cores and 8 threads at 3.5 to 3.9 GHz, a workhorse, but it runs very hot without good cooling. I would not say delidding was widespread, but people who delidded the chip say the result is much better; on water cooling it reaches about 5 GHz if you are lucky. The same has been true of every processor since Sandy Bridge. And that is not all: this generation also included the Xeon E3-1231V3, which was in effect the same i7 4770, only without integrated graphics and overclocking. It is interesting because it fit an ordinary socket 1150 motherboard and was much cheaper than the i7. A little later the i7 4790K came out with an improved thermal interface, though still not the solder of old. Nevertheless, it overclocks better than the 4770; there was even talk of 4.7 GHz on air, with a good cooler, of course.

    This generation also had its "monsters" (Haswell-E): the i7-5960X Extreme Edition, the i7-5930K, and the 5820K, server solutions adapted for the desktop market. These were the most fully loaded processors of their time. They used the new LGA 2011-v3 socket and cost a lot of money, but their performance was exceptional, which is not surprising: the top model of the line had a full 16 threads and 20 MB of cache. Pick your jaw up off the floor and let's move on.

    In 2015, Skylake came out on socket 1151, and at first glance performance was almost the same, but this generation differed from all the previous ones: first, by a thinner heat-spreader package for better heat exchange with the cooling system, and second, by support for DDR4 memory and software support for DirectX 12, OpenGL 4.4, and OpenCL 2.0, promising better performance in modern games that use these APIs. It also turned out that even processors without a K index could be overclocked via the bus clock, but this loophole was quickly closed. Whether the method still works through some workaround, we do not know.

    There were few processors here; Intel had refined its business model again: why release six processors if only three or four of the line sell well? So it released four mid-range and two expensive-segment processors. In my own observation, people most often take the i5 6500 or the 6600K, still 4 cores with 6 MB of cache and Turbo Boost.

    In 2016, Intel introduced the fifth-generation Broadwell-E processors. The Core i7-6950X was the world's first ten-core desktop processor. At launch it cost 1,723 dollars, and many found such a move by Intel very strange.

    On March 2, 2017, the new high-end AMD Ryzen 7 processors went on sale, in three models: the 1800X, 1700X, and 1700. As you already know, the official presentation of Ryzen was held on February 22 of that year, at which Lisa Su said the engineers had exceeded the 40% improvement target: Ryzen was in fact 52% ahead of Excavator, and given that more than half a year has passed since sales began, with new BIOS updates improving performance and fixing minor bugs in the Zen architecture, it is fair to say the figure has grown to 60%. Today the top Ryzen is the fastest eight-core processor in the world. And here one more suspicion was confirmed, the one about Intel's ten-core chip: in fact it was the real and only answer to Ryzen, prepared in advance. Intel stole the victory from AMD ahead of time: whatever you release, the fastest processor stays with us. And so at the presentation Lisa Su could not call Ryzen the absolute champion, only the best of the eight-cores. Such is Intel's subtle trolling.

    image

    image

    Now AMD and Intel are introducing new flagship processors: AMD's Ryzen Threadripper and Intel's Core i9. The 18-core, 36-thread flagship Intel Core i9-7980XE costs about two thousand dollars. The 16-core, 32-thread Intel Core i9-7960X costs 1,700 dollars, while the comparable 16-core, 32-thread AMD Ryzen Threadripper 1950X costs about a thousand dollars. Draw your own conclusions, gentlemen.

    Video on this material: www.youtube.com/watch?v=PJmPBWQE8Uk&t

    Written by:
    RiddleRider
    Alexander Lis
    Blabber_mouth


    If you were building a new powerful computer, which processor would you prefer?

