Life outside Zion
I was surprised to find that for most readers it goes without saying that a server contains the same familiar Xeons, while all other processors are something distant and almost nonexistent. That is why the article "Processors for corporations" aroused such lively interest. Since the topic is so interesting, let's try to fill the information vacuum. So,

Who buys them, and why?
In one of its newsletters, Gartner clearly outlines three reasons to buy something other than x86:
- platform-specific code;
- performance;
- reliability.
Let's go in order.
The Land of ELFs

The first reason is simple: recompiling code is always a risk, even when the platforms are similar, as SPARC/Solaris and POWER/AIX are. And if the platform resembles nothing else at all, then "getting off" it is almost impossible.
Nowadays, with the ubiquity of Java, x86 is increasingly used for application servers. Even Oracle, with SPARC in its portfolio, promotes the x86-based Exalogic as its application-server platform. However, not all applications were written in the last 3-5 years, so banks often still run applications on HP-UX, Solaris, and even System i. Although, of course, x86 servers form the vast majority in this segment.
Size matters

Naturally, the question arises: maybe all these unusual processors are significantly faster than Xeons?
There are many benchmarks of varying degrees of objectivity. Here, for example, is a command that can be run on any *nix machine:

openssl speed md5
And here are its results:

With this set of parameters, openssl simply computes hashes and tracks how many it managed to compute in three seconds. More precisely, how much data was processed when hashing blocks of the corresponding length (the larger the block, the greater the total volume). Only the speed of the processor itself is tested: no buses, ports, or peripherals are involved.
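The idea of the test can be sketched in a few lines of Python — a rough analogue of `openssl speed md5`, not openssl's actual implementation: hash fixed-size blocks for a set time window and report throughput.

```python
import hashlib
import time

def speed(block_sizes=(16, 64, 256, 1024, 8192), seconds=0.25):
    """Rough analogue of `openssl speed md5`: for each block size,
    hash as many blocks as possible within the time window and
    report approximate throughput in bytes per second."""
    results = {}
    for size in block_sizes:
        block = b"\x00" * size
        processed = 0
        deadline = time.perf_counter() + seconds
        while time.perf_counter() < deadline:
            hashlib.md5(block).digest()
            processed += size
        results[size] = processed / seconds
    return results

if __name__ == "__main__":
    for size, bps in speed().items():
        print(f"{size:5d}-byte blocks: {bps / 1e6:8.1f} MB/s")
```

Larger blocks show higher throughput simply because the fixed per-call overhead is amortized over more data — which is why openssl reports a row per block size.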
The picture shows far from my whole collection, but if you look closely, you can see a lot of interesting things:
- The UltraSPARC IV, quite a serious server processor released in 2006, only slightly exceeds in computing power the ARM that can be found in any mobile phone today.
- The computing power of industrial RISC processors of 5-6 years ago is roughly the same; today IBM POWER leads.
- Xeon not only keeps up with, but surpasses, traditional RISC processors.
Of course, this test makes no more sense than the well-known exercise with a ruler. Moreover, the newer SPARC processors (T4 and later) tear any Xeon apart: they implement a number of hash algorithms in hardware. Nevertheless, it is clear that raw computation speed is not the point. Then what is?
For corporations, the overall server performance matters, which is achieved not only by the performance of one processor, but also by the number of these processors.
Did you know that PayPal started out on large Sun servers? At first there was one server, then two, and only when PayPal realized that it had long since outgrown its servers, and would soon outgrow any server, were the applications rewritten to work on clusters. There are not many banks in the world that have outgrown the Biggest Servers: about fifty in total. And why speak of banks, when Tom Kyte himself says there is no database capable of fully loading a modern server...
Broadly, the largest Xeon-based servers from Sun, HP, and IBM today have 80 cores (8 chips with 10 cores each), while Dell stops at 40 cores.
Meanwhile, Sun's SPARC Enterprise M9000 server has held up to 256 SPARC64 VII cores since July 2008 (the server went on sale in April 2007, but until July 2008 it could take only 128 SPARC64 VI cores). Competitors lagged behind for a long time: in 2011 IBM released its POWER 795 server with 256 POWER7 cores, and in 2013 HP caught up with its Integrity Superdome.
Sun has been through difficult times, but they are behind it, and the SPARC line is developing by leaps and bounds. And while the new Oracle servers are still far from the record (128 cores in the SPARC Enterprise T5-8, up to 192 cores in the SPARC Enterprise M5-32), the record still belongs to the SPARC/Solaris platform: the Fujitsu M10-4S, with up to 1024 cores.
An inquiring mind will immediately point to the SGI UV (up to 2048 Xeon cores) or to Project Odyssey, which aims to bring x86 into the Superdome. However, database servers on such machines are nowhere to be seen — and what would application servers need them for?..
RAS, two, three...

Finally, the third reason to buy RISC servers is reliability: Reliability, Availability, Serviceability. Unlike Google, which knows that any server can disappear at any moment and therefore sees no point in buying expensive, reliable machines, traditional relational DBMSs assume that the server is reliable.
Of course, there are all kinds of clustering technologies at various levels. However, if a server crashes, then, first, restoring the service on another cluster node takes time (from several minutes to several hours) and, second, there is always a risk that the recovery will be incomplete and/or incorrect. In general, the best failover is the one that never had to happen. That is why engineers do what they can to protect the server from various surprises. The table below shows a set of such "surprises," using Sun servers as an example:

Pay attention to the following:
- The larger (and more expensive) the server, the higher the degree of protection. So even if relatively small servers are needed, from a reliability standpoint it is better to buy large ones and carve them into domains.
- The protection level of x86 servers (the first column shows the Sun X2-8) roughly corresponds to midrange RISC servers. HP's x86 servers do better, but they still fall short of the Superdome.
How many do they buy?

The server market has fairly pronounced seasonality (chart based on the IDC Worldwide Quarterly Server Tracker):

Of course, the data is very approximate. Moreover, when you take the bulletin for the first quarter of this year, where the figures are compared with the first quarter of last year, those last-year figures differ from the ones in last year's bulletin. If we normalize the chart, we get something like this:

That is, HP and IBM, each controlling about 30% of the market, periodically trade the lead; Dell and Oracle (Sun) + Fujitsu hold about 15% each; the remaining 10% belongs to everyone else.
What do these astronomical sums consist of? Here is what (the 2009 chart is based on the Gartner newsletter "Market Trends: Transformation of Non-x86 Servers"):

If you count servers in units, the share of x86 is roughly 98%. Its share in monetary terms is also growing steadily, having broken the 50% threshold in 2005.
RISC servers account for about a third of revenue; among the "others," the bulk belongs to the IBM System z and Fujitsu BS2000 mainframes (80% and 12%, respectively), and the rest is exotica.
And what's on the market?

Let's now look at the manufacturers' server lines.
IBM

The Blue Giant has the largest set of product lines.
First, there are the mainframes, tracing their family tree from the famous System/360 (from which the Soviet ES EVM series was copied). Then came System/370, System/390, zSeries and, finally, System z, where "z" stands for "zero downtime." This is an entirely self-contained world with its own processors running at up to 5 GHz, its own buses, and multiple redundancy of everything. Its domain is core banking (i.e., the key banking systems) and card processing.
Second, the "systems for small business": that is exactly how the AS/400 was positioned at its creation. Today, core banking systems on this platform run in several of the top-30 Russian banks. At different times the platform was called AS/400, iSeries (yes, iPhone is for girls, iSeries is for real boys) and, finally, System i. The basis of the platform is a virtual machine that contains a relational DBMS, user-interface development tools, and everything else. Like a JVM, only better. As a result, applications written several decades ago run on modern servers without recompilation.
Third, the AIX platform, which, like all commercial Unixes, traces its pedigree to Unix System V. Its servers were called, at different times, RS/6000, pSeries, System p, and POWER Systems.
In terms of hardware, POWER Systems and System i are absolutely identical: both are RISC servers built on the POWER processor.
And finally, a line of servers based on x86 processors: PC Server, Netfinity, eServer xSeries, System x.
HP

The company's fortune rests on the x86-based ProLiant server line, which came with the acquisition of Compaq.
The Integrity line consists of servers based first on PA-RISC processors and later on Intel Itanium.
The NonStop platform, widely known in narrow circles, is identical in hardware to Integrity blades. But that is a story for the next installment.
Sun, Oracle, and a little Fujitsu

Oracle has two lines of servers: SPARC Enterprise, based on SPARC RISC processors, and Sun x86 Servers (formerly SunFire X), based on x86. The SPARC line, in turn, consists of the T-series (formerly SunFire T) built on the "junior" SPARC T processors, the M-series (formerly SunFire E and SunFire V) built on the "senior" SPARC M processors, and Fujitsu SPARC servers, which can only be purchased through Oracle.
Dell

Dell ships a single line of x86 servers: PowerEdge.
Words, words...

Now it's time to figure out what RISC, NUMA, and CMT are.
RISC vs CISC

As you know, RISC stands for "Reduced Instruction Set Computer." This is usually understood as "a computer with a reduced set of instructions," but that is not quite right. Rather, it is "a computer with a set of reduced instructions."
Let's compare the same ADD instruction in the Intel 80486 (a typical CISC, Complex Instruction Set Computer) and SPARC V7:

| | Intel 80486 | SPARC V7 |
| Operand types | al (ax, eax), imm8(16/32); r/m8(16/32), imm8(16/32); r/m16(32), imm8; r/m8(16/32), r8(16/32); r8(16/32), r/m8(16/32) | r, r, r; r, r, d12 |
| Instruction length, bytes | 2 - 9 | 4 |
| Execution time, cycles | 1 - 3 | 1 |

The 80486 offers many ways of addressing the instruction's operands, and the instruction itself has variable length and variable execution time.
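The fixed 4-byte format is easy to illustrate. Below is a rough sketch of the register form of the SPARC ADD encoding (the format-3 field layout is quoted from memory of the SPARC V7/V8 description; treat it as illustrative, not assembler-grade):

```python
# Register-form ADD, SPARC "format 3" (assumed layout, for illustration):
#   [31:30] op = 2 | [29:25] rd | [24:19] op3 = 0 | [18:14] rs1 | [13] i = 0 | [4:0] rs2

def encode_add(rd, rs1, rs2):
    """Pack an `add rs1, rs2, rd` into a 32-bit instruction word."""
    return (2 << 30) | (rd << 25) | (0 << 19) | (rs1 << 14) | (0 << 13) | rs2

def decode_add(word):
    """Unpack (rd, rs1, rs2) from the fixed-width word."""
    return ((word >> 25) & 31, (word >> 14) & 31, word & 31)

w = encode_add(7, 2, 3)        # add %r2, %r3, %r7
assert w.bit_length() <= 32    # always exactly one 4-byte word
assert decode_add(w) == (7, 2, 3)
```

Every instruction is one 32-bit word with fields at fixed bit positions, which is exactly what makes RISC decoding (and pipelining) so much simpler than parsing the 80486's 2-9 byte variable encodings.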
Time has shown the advantages of RISC over CISC: branches are easier to predict, the internal pipeline is shorter, and hence task switching costs less. So why does the x86 family show good performance? Because, starting with the Pentium Pro released in 1995, complex x86 instructions are translated into a set of simple RISC-like micro-operations before execution.
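The translation idea can be shown with a toy sketch (the micro-op names here are hypothetical; real decoders are vastly more complex): a CISC add with a memory operand is cracked into a load, a register-register add, and a store.

```python
# Toy model of CISC-to-micro-op cracking (illustrative names only).

def crack(instr):
    """instr: (opcode, dest, src). A memory-destination add is split
    into simple load/add/store micro-ops; register forms pass through."""
    op, dst, src = instr
    if op == "add" and dst.startswith("["):       # memory destination
        addr = dst.strip("[]")
        return [("load", "tmp", addr),            # read the memory operand
                ("add", "tmp", src),              # register-register add
                ("store", addr, "tmp")]           # write the result back
    return [instr]

print(crack(("add", "[rbx]", "rax")))   # three micro-ops
print(crack(("add", "rcx", "rax")))     # already RISC-like: one micro-op
```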
Itanium is counted as RISC in all the statistics, but in fact it is VLIW (Very Long Instruction Word) or, more precisely, EPIC (Explicitly Parallel Instruction Computing). As the saying goes, a boat sails the way you name it...
The essence of this architecture is that the processor has several execution units, and the control word contains instructions for all of them at once. For example, for a program such as

R2 = R3 + R4
R1 = R5 + R6
R7 = R2 + R1
the following microcode will be generated:

R2 = R3 + R4; R1 = R5 + R6
R7 = R1 + R2; nop
Parallelization is performed not by the processor, but by the compiler. As a result, writing a code generator becomes a daunting task.
Compared to VLIW, EPIC adds a number of improvements: instruction prefetching, speculative execution, a marker for the dependence of the next operation on the current one... But the essence is the same.
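What the compiler does here can be sketched as a toy scheduler (illustrative only): pack each operation into the earliest bundle that comes after the bundles producing its inputs, and pad unused slots with nops.

```python
WIDTH = 2  # number of execution units, i.e. slots per control word

def bundle(ops, width=WIDTH):
    """ops: list of (dest, src1, src2) register names for `dest = src1 + src2`.
    Greedily place each op in the earliest bundle strictly after the
    bundles that produce its source registers, padding with nops."""
    bundles = []
    producer = {}  # register -> index of the bundle that writes it
    for dest, s1, s2 in ops:
        earliest = max(producer.get(s1, -1), producer.get(s2, -1)) + 1
        i = earliest
        while i < len(bundles) and len(bundles[i]) >= width:
            i += 1              # bundle full, spill to the next one
        if i == len(bundles):
            bundles.append([])
        bundles[i].append(f"{dest} = {s1} + {s2}")
        producer[dest] = i
    return [b + ["nop"] * (width - len(b)) for b in bundles]

prog = [("R2", "R3", "R4"), ("R1", "R5", "R6"), ("R7", "R2", "R1")]
for b in bundle(prog):
    print("; ".join(b))
```

On the three-instruction program above this reproduces the article's microcode: the two independent adds share one control word, and the dependent add goes into the next one alongside a nop. Writing a production-grade scheduler that fills every slot on real code is the genuinely hard part.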
SMP vs NUMA

SMP is Symmetric Multiprocessing, an architecture in which every processor has equal access to every cell of shared memory. NUMA is Non-Uniform Memory Access: a processor accesses its own memory quickly and everyone else's more slowly:

On the left is SMP, on the right is NUMA
Clearly there are no miracles, and the transfer rate to memory located on the same board as the processor is higher than to memory on another board. So from the standpoint of physics, any sufficiently large machine with more than one processor board is NUMA. The difference comes down to this: in SMP systems (AIX, Solaris) the operating system takes care of optimizing the placement of pages in physical memory, while in NUMA systems (HP-UX) the application can perform this optimization itself through a special API. As practice shows, application-level optimization here does not justify itself.
There is one more term from this area: CMT, Chip Multithreading. It applies to the T-line of SPARC processors. Essentially this is the same SMP, but with significantly limited bus bandwidth. The assumption is that each core works with its own memory area, which is typical for application servers but not for a DBMS. That is why, up to the T3, Sun officially advised against running DBMSs on T-series servers; on the T4 Oracle already ran well, and the T5 line consists of full-fledged SMP servers.
And now, actually, about RISC servers

So we have reached the most interesting part: industrial RISC servers (the 2009 chart is based on the Gartner newsletter "Market Trends: Transformation of Non-x86 Servers"):

If we talk about Unix servers, there is an unambiguous correspondence "manufacturer - platform name - processor - OS":

| Company | IBM | HP | Oracle |
| Line | POWER Systems | Integrity | SPARC Enterprise |
| CPU | POWER | Intel Itanium | SPARC |
| OS | AIX | HP-UX | Solaris |
| Architecture | SMP / RISC | NUMA / EPIC | SMP / RISC |
Well, now a little history ...
Where did RISC go

In 1980, the RISC project was launched at the University of California, Berkeley, culminating in 1982 in the RISC I processor. The processor itself never went into production, but it spawned many followers.

One of the first followers was MIPS. These processors went into the famous Silicon Graphics workstations, which brightened the harsh Russian reality of the '90s in Pelevin's novel Generation "P". They also powered the Tandem NonStop servers: a horizontally scalable, highly reliable database and application platform. Tandem was later bought by Compaq, which tried to move the platform to Alpha, without success. When Compaq was in turn bought by HP, NonStop switched to Itanium. Today MIPS is found only in industrial computers.
We associate ARM mostly with portable devices: phones, tablets, and other iPhones. However, HP is running the Moonshot project to create low-power servers. The prototypes are built on ARM processors, but the production versions will most likely use Intel Atom.
Anyone who worked with DEC Alpha servers remembers them warmly. In the '90s it would not have occurred to anyone to compare these high-performance 64-bit RISC processors with x86. Incidentally, one of the first search engines, AltaVista, was created by Digital to show the world the power of Alpha processors.
As happens with many good technologies, Alpha was unlucky: in 1998 Digital was bought by Compaq, and in 2001 Compaq decided to switch to the "industry standard," Itanium.
HP itself, before buying Compaq, also created its own RISC processor: PA-RISC. HP Integrity servers were built on it, and some still serve today, even though PA-RISC was discontinued on December 31, 2008, even as spare parts.
In 2001, HP and Intel announced a joint initiative to create a "unified industry standard for servers." The architecture was named IA64 (as opposed to IA32, which is none other than x86), and the brand was Itanium. True, wicked tongues immediately renamed it Itanic. Intel made one mistake after another: first, the choice of the EPIC architecture can hardly be considered a good decision. Second, x86 instruction support was announced for the processor: companies bought the new servers expecting to run their old x86 software until new software could be written. But the old software ran poorly, not at all up to the money paid for the new hardware, and nobody hurried to write new software.
IBM, Dell, and HP launched servers on the new platform. However, Intel's constant delays in delivering new chips led IBM to return to POWER and Dell to focus on x86. HP moved to Itanium all the platforms it had accumulated through in-house development and a series of acquisitions: HP-UX, OpenVMS, and NonStop.
Now, especially after Oracle's high-profile announcement, Itanium is slowly dying. And given how many interesting platforms HP has buried, that seems only fair...
SPARC (Scalable Processor ARChitecture) is an open standard developed by Sun. The SPARC standard is registered with the IEEE as 1754-1994 and is maintained by SPARC International, Inc., founded in 1989. Anyone can produce SPARC processors; it is enough to buy a license for 99 dollars. Over the years, SPARC processors have been made by Texas Instruments, Ross Technology, and even the Russian company MCST (Moscow Center of SPARC Technologies). But the main manufacturers of the processors and servers are, of course, Sun Microsystems and Fujitsu.

In the figure, green indicates the processors that correspond to the seventh version of the specification (SPARC V7), orange - SPARC V8, red - SPARC V9. Yellow fill means multi-core chip.
So the UltraSPARC line evolved until 2004. After that, Sun began work on a brand-new UltraSPARC V processor, code-named Rock. It got as far as a prototype and testing, but the work was suddenly stopped.
In parallel, a new processor, Niagara, appeared in 2005. It was intended for application servers.
Time has shown that applications still need computing resources, so Xeon has firmly taken this niche, displacing the T-series from there.
In fact, all the SPARC servers released until recently (M3000, M5000, M8000, M9000) are Fujitsu machines. Then what, you ask, did Sun contribute? Simple: nobody needs a server without an operating system, and the rights to Solaris belong to Sun. A mutually beneficial partnership.
Just a couple of years ago analysts predicted the imminent death of SPARC, especially after the Fukushima accident. It seemed that Oracle had buried the whole area, believing that RAC on x86 would replace large servers. That did not happen, however, and it became necessary to develop what was at hand, i.e., the T-series.
The success, I must say, is simply amazing. Today Oracle has a core code-named S3, on which all of its modern SPARC processors are built. The T5 differs from the T4 in higher clock frequency (3.6 GHz versus 3.0 GHz), more cores per chip (16 versus 8), and more external links. As a result, the largest T4-based server uses 4 chips (T4-4: 4 × 8 = 32 cores), while the largest T5-based one uses 8 (T5-8: 8 × 16 = 128 cores). The M5 is the same T5 with more external interprocessor links and more cache but fewer cores. The maximum server is the M5-32 (32 × 6 = 192 cores).
Oracle's SPARC processors have many interesting properties: depending on the load, the number of threads executed by one core adjusts automatically (from 1 to 8), and as the number of active cores decreases, the clock frequency rises. A number of hashing and encryption algorithms are implemented in hardware.
Fujitsu, after a long break, has also released a new processor, the M10. It continues the SPARC64 line: the clock frequency has been raised and the floating-point unit reworked... In addition, the server architecture has changed: servers can now be assembled from building blocks, like the IBM POWER 770. As a result, a single server can hold up to 1024 cores (2048 threads).
Today Oracle is the only one of the Big Three with a distinct roadmap for the development of its RISC processors. A simple slide of it for presentations can be taken from the official website. In lectures and seminars Oracle talks about a new core that is under testing. It has also proclaimed a "software in silicon" program, i.e., hardware support for, say, Oracle Database data types and compression algorithms.
In 1985, the Thomas J. Watson Research Center (the one the Watson supercomputer is named after) launched the America project to develop a second-generation RISC processor. In 1986, IBM's Austin office began work on the RS/6000 series, and in 1990 the first computers with POWER-architecture processors, the RS/6000 (RISC System/6000), were released.

In the picture, 32-bit processors are marked in orange, 64-bit in red. Yellow shading indicates a multi-core chip.
A year later, in 1991, the AIM alliance (Apple, IBM, Motorola) was formed to develop the PowerPC processor. The processor went through several incarnations, and in 2005 Freescale Semiconductor (what remained of Motorola) founded power.org, an alliance for developing the Power architecture that anyone can join.
There is also a PowerPC64 processor, which is used mainly in game consoles - Sony PlayStation, Nintendo Wii, Microsoft Xbox 360 ...
As for the main branch, POWER7 is today the most powerful of the RISC processors. Each core can run from 1 to 4 threads depending on the mode (SMT1, SMT2, SMT4), which, unlike on SPARC, is selected statically. In addition, the POWER 780 machine supports a "turbo mode": raising the clock frequency of half the cores by switching off the other half.
The largest server is POWER 795, 256 cores, up to 1024 threads.
There will be a POWER8, but when and with what characteristics is unknown...
If this article gathers enough upvotes, the next installment will be devoted to clustered relational DBMSs, of which there are as many as seven on the market.
Who buys them and why?
In one of these newsletters, Gartner clearly outlines three reasons to buy something other than x86:
- platform-specific code;
- performance;
- reliability.
Let's go in order.
Country ELF'ov
The first reason is simple: recompiling code is always a risk, even if the platforms are similar, such as SPARC / Solaris and POWER / AIX. And if the platform does not look like anything, then “getting off” it is almost impossible.
Now, with the ubiquity of Java, x86 is increasingly being used as application server. Even Oracle, with its SPARC portfolio in its portfolio, focuses on Exalogic based on x86 servers as application servers. However, not all applications have been written in the last 3-5 years, therefore, banks often have applications for HP-UX, Solaris, and even System i. Although, of course, in this segment of servers on x86 - the vast majority.
Size matters
Naturally, the question arises: maybe all these unusual processors are significantly more productive than Xeons?
In the world there are many tests of varying degrees of objectivity. Here, for example, is a command that can be executed on any * nix-machine:
openssl speed md5And here are its results:

With this set of parameters, openssl simply calculates hashes and tracks how many such hashes it was possible to calculate in three seconds. More precisely, how much data was able to be processed when calculating with blocks of the corresponding length (the larger the block, the greater the total volume). Only the speed of the processor itself is tested - no buses, ports, peripherals are involved in the test.
In the picture, far from the whole selection I have, but if you look closely, you can see a lot of interesting things:
- Quite a server processor UltraSPARC IV, released in 2006, in computing power is only slightly superior to ARM, which can stand in any mobile phone today.
- The computing power of industrial RISC processors 5-6 years ago is approximately the same; Today, IBM POWER leads.
- Xeon not only does not lag behind, but also surpasses traditional RISC processors.
Of course, there is no more sense in this test than in the well-known exercise with a ruler. Moreover, the new SPARC processors (T4 and older) tear any Xeon like a Tuzik hot-water bottle: they have a number of hash algorithms implemented in hardware. But nevertheless it is clear - the matter is not in the speed of calculations. And in what?
For corporations, the overall server performance matters, which is achieved not only by the performance of one processor, but also by the number of these processors.
Did you know that PayPal started with large Sun servers? At first there was one server, then two, and only then, when PayPal realized that its servers had outgrown for a long time and any servers would soon outgrow, the applications were rewritten to work on clusters. There are not so many banks that have outgrown the Biggest Servers in the world, with only fifty. Why are there banks, if Tom Kite himself says that there is no database that can completely occupy a modern server ...
In general, the largest servers on the Xeon processor by Sun, HP and IBM today have 80 cores (8 crystals of 10 cores) , and Dell is limited to 40 cores.
At the same time, Sun released the SPARC Enterprise M9000 server back in July 2008.containing up to 256 SPARC64 VII cores (note: the server went on sale in April 2007, but until July 2008 only 128 SPARC64 VI cores could be installed in it). Competitors lagged behind for a long time, but in 2011 IBM released its POWER 795 server with 256 POWER 7 cores, and in 2013, HP pulled in with its Integrity Superdome .
Sun had difficult times, but they are behind. The SPARC line is developing by leaps and bounds. And while the size of the new Oracle servers is still far from the record (128 cores in SPARC Enterprise T5-8 or up to 192 cores in SPARC Enterprise M5-32 ), the record still belongs to the SPARC / Solaris platform: Fujitsu M10-4S , up to 1024 SPARC M10 cores .
An inquiring mind will immediately give an exampleSGI UV (up to 2048 Xeon cores) or Project Odyssey for x86 implantation in Superdome. However, you can’t see anything until the database servers on such machines. And why are they to application servers? ..
RAS, two, three ...
Finally, the third reason to buy RISC servers is reliability. Reliability, Availability, Serviceability. Unlike Google, which knows that the server can disappear at any time, and therefore it makes no sense to buy an expensive and reliable server, traditional relational DBMSs consider the server to be reliable.
Of course, there are all kinds of clustering technologies at various levels. However, if the server crashes, then, firstly, the restoration of the service on another cluster node will take some time (from several minutes to several hours), and secondly, there is always a risk that the restoration will be incomplete and / or incorrect. In general, the best defense is the one that didn't work out. Therefore, engineers are trying as they can protect the server from various surprises. The table below shows a set of such “surprises” on the example of Sun servers:

Pay attention to the following:
- The larger (and more expensive) the server, the higher the degree of protection. Therefore, even if relatively small servers are needed, in terms of reliability it is better to buy large and cut them into domains.
- The x86 server security level (the first column is the Sun X2-8 server) roughly corresponds to midrange RISC servers. HP's x86 servers have better performance, but they still aren’t up to Superdome
How many buy them?
The server market has a fairly pronounced seasonality (chart according to the IDC Worldwide Quarterly Server Tracker):

Of course, the data is very approximate. Moreover, when you take the ballot for the first quarter of this year, where the figures are compared with the first quarter of last year, the figures for last year are different from last year's bulletin. If we normalize the schedule, it turns out something like this:

That is, HP and IBM, controlling about 30% of the market, periodically take the palm from each other, about 15% of the market from Dell and Oracle (Sun) + Fujitsu, and 10% - everyone else.
What do these astronomical sums consist of? And here’s what (the 2009 chart is based on the Gartner Market Trends: Transformation of Non-x86 Servers newsletter):

If you count the server in pieces, then the proportion of x86 will be approximately 98%. The share in monetary terms is growing steadily, breaking the 50% threshold in 2005.
RISC servers account for about a third of revenue, and of the “others” a significant part is the IBM System z and Fujitsu BS2000 mainframes (80% and 12%, respectively), and the rest is kind of exotic.
And what about the market?
Let's now deal with the line of servers.
Ibm
The largest set of rulers is the Blue Giant.
Firstly, these are mainframes leading their family tree from the famous System / 360 (from which the Single Series was copied). Then came System / 370, System / 390, zSeries, and finally System z, where “z” means “zero downtime.” This is completely your own world with its own processors up to 5 GHz, its own buses and multiple duplication of everything. Scope - core banking (i.e., key banking systems) and processing of plastic cards.
Secondly, “systems for small business” - this is exactly how it was positioned when creating AS / 400. Now the ABSs on this platform work in several Russian banks from the top 30. At different times, the platform was called AS / 400, iSeries (yes, for girls - iPhone, and for real boys - iSeries) and, finally, System i. The basis of the platform is a virtual machine that contains a relational database management system, user interface development tools, and everything else. Like a JVM, only better. As a result, applications written several decades ago run on modern servers without recompilation.
Thirdly, the AIX platform, leading its pedigree, like all commercial Unixes, from Unix System V. The servers at different times were called RS / 6000, pSeries, System p and POWER systems.
In terms of hardware, POWER Systems and System i are absolutely identical: they are RISC servers based on the POWER processor.
And finally, a line of servers based on x86 processors: PC Server, Netfinity, eServer xSeries, System x.
HP
The company's wealth is based on the x86-based ProLiant line of servers purchased from Compaq.
The Integrity line is a server first on PA-RISC processors, and then Intel Itanium.
Widely known in narrow circles, the NonStop platform, identical in hardware to the Integrity Blade. However, talk about her in the next series.
Sun, Oracle and a little Fujitsu
Oracle has two lines of servers: SPARC Enterprise based on SPARC RISC processors and Sun x86 Servers (formerly SunFire X) based on x86. The SPARC line, in turn, consists of the T-series (formerly SunFire T) based on the "younger" SPARC T processors, the M-series (formerly SunFire E and SunFire V) based on the "older" SPARC M processors and Fujitsu SPARC servers , which can only be purchased from Oracle.
Dell
Dell releases the only line of x86-powered PowerEdge servers.
Words words...
Now it's time to figure out what RISC, NUMA, CMT is ...
RISC vs CISC
As you know, RISC stands for "Reduced Instruction Set Computer". Usually, this translates into Russian as “a computer with a limited set of instructions”, but this is not entirely true. Rather, "a computer with a set of limited instructions."
Let's compare the same ADD command in Intel 80486 processors (typical CISC, Complex Instruction Set Computer) and SPARC V7:
| Intel 80486 | SPARC v7 | |
| Types of Operands | al (ax, eax), imm8 (16.32) r / m8 (16.32), imm8 (16.32) r / m16 (32), imm8 r / m8 (16.32), r8 (16.32 ) r8 (16.32), r / m8 (16.32) | r, r, r r, r, d12 |
| Team length | 2 - 9 | 4 |
| The number of measures | thirteen | 1 |
80486 has many options for addressing the command parameters, while the command itself has a variable length and different execution times.
Life has shown the advantage of RISC over CISC: it is easier to predict transitions, shorter than the internal pipeline, therefore, less overhead when switching tasks. Why does the x86 family show good performance? And because, starting with the Pentium Pro processor, released in 1995, the complex x86 instruction system is translated into a set of simple RISC kernel instructions before being executed.
Itanium is included in RISC in all statistical calculations, but in fact it is VLIW (Very Long Instruction Word), or rather EPIC (Explicit Parallel Instruction Computer). That's really what you call a yacht ...
The essence of this architecture is that the processor has several calculators, and the control word contains instructions for all calculators at once. For example, if we have such a program
R2 = R3 + R4 R1 = R5 + R6 R7 = R2 + R1
then the following microcode will be generated for it:
R2 = R3 + R4; R1 = R5 + R6 R7 = R1 + R2; nop
Parallelization is performed not by the processor, but by the compiler. As a result, writing a code generator becomes a daunting task.
Compared to VLIW, EPIC contains a number of improvements: prefetching commands, speculative execution, a marker of the dependence of the next operation on the current ... But the essence is the same.
SMP vs NUMA
SMP is a Symmetric Multiprocessor, that is, an architecture in which each processor has access to each shared memory cell. NUMA - Non-unified Memory Access, i.e., the processor has faster access to its memory and less faster access to someone else’s:

On the left is SMP, on the right is NUMA
It is clear that there are no miracles, and the exchange rate with memory located on the same board as the processor is higher than with memory on another board. That is, from the point of view of physics, any sufficiently large machine in which there is more than one processor board is NUMA. The difference boils down to the fact that in SMP-systems (AIX, Solaris) the operating system takes care of optimizing the placement of pages in physical memory, and in NUMA (HP-UX) the application can do this optimization through a special API. As practice shows, applied optimization in this case does not justify itself.
There is another term in this area: CMT, Chip Multithreading. It applies to the T line of SPARC processors. In essence this is the same SMP, but with significantly limited bus bandwidth. It is assumed that each core accesses its own memory area, which is typical for application servers but uncharacteristic of a DBMS. Therefore, up to the T3, Sun officially did not recommend running a DBMS on T-series servers; on the T4, Oracle already worked well, and the T5 line consists of full-fledged SMP servers.
And actually about RISC servers
So we have reached the most interesting part: industrial RISC servers (the chart is for 2009, based on the Gartner newsletter "Market Trends: Transformation of Non-x86 Servers"):

If we talk about Unix servers, there is an unambiguous correspondence "manufacturer - platform name - processor - OS":
| Company | IBM | HP | Oracle |
|---|---|---|---|
| Product line | Power Systems | Integrity | SPARC Enterprise |
| CPU | POWER | Intel Itanium | SPARC |
| OS | AIX | HP-UX | Solaris |
| Architecture | SMP / RISC | NUMA / EPIC | SMP / RISC |
Well, now a little history ...
Where did RISC go
In 1980, the RISC project was started at the University of California, Berkeley, culminating in 1982 with the RISC I processor. The processor itself never went into industrial production, but it spawned many followers.

HP
One of the first followers was MIPS. These processors went into the famous Silicon Graphics workstations, which colored the harsh Russian reality of the '90s in Pelevin's novel Generation "П". They also powered the Tandem NonStop servers, a horizontally scalable, highly reliable database and application platform. Tandem was later bought by Compaq, which tried to move the line to Alpha, to no avail. When Compaq was in turn bought by HP, NonStop switched to Itanium. Today MIPS is found only in embedded and industrial computers.
For most of us, ARM is associated with portable devices: phones, tablets, and other iPhones. However, HP is running the Moonshot project to create low-power servers. The prototypes are built on ARM processors, but the shipping version will most likely use Intel Atom.
Everyone who worked with DEC Alpha servers remembers them with warmth. In the '90s it would never have occurred to anyone to compare these high-performance 64-bit RISC processors with x86. Incidentally, one of the first search engines, AltaVista, was created by Digital to demonstrate the power of Alpha processors to the world.
As happens with many good technologies, Alpha was out of luck: in 1998 Digital was bought by Compaq, and in 2001 Compaq decided to switch to the "industry standard", Itanium.
Before the Compaq purchase, HP had also created a RISC processor of its own, PA-RISC. It powered the HP 9000 servers, some of which still serve to this day, despite the fact that PA-RISC was discontinued on December 31, 2008, even in the form of spare parts.
In 2001, HP and Intel announced a joint initiative to create a "unified industry standard for servers." The architecture was called IA-64 (as opposed to IA-32, which is nothing but x86); the brand was Itanium. True, wagging tongues immediately renamed it Itanic. Intel made one mistake after another. First, the choice of the EPIC architecture can hardly be considered a good decision. Second, x86 instruction support was announced for the processor: companies bought the new servers expecting to run their old x86 software on them until new software was written. But the old software ran poorly, not at all at the level the money paid for the new hardware would suggest, and no one was in a hurry to write new software.
IBM, Dell, and HP all launched servers on the new platform. However, Intel's constant delays in delivering new chips led IBM to return to POWER and Dell to focus on x86. HP moved to Itanium all the platforms it had accumulated through in-house development and a series of acquisitions: HP-UX, OpenVMS, and NonStop.
Now, especially after Oracle's high-profile announcement, Itanium is slowly dying. And given how many interesting platforms HP buried along the way, that seems only fair...
SPARC
SPARC (Scalable Processor ARChitecture) is an open standard developed by Sun. The SPARC standard is registered with the IEEE as number 1754-1994 and is maintained by SPARC International, Inc., founded in 1989. Anyone can produce SPARC processors; it is enough to buy a license costing 99 dollars. Over the years, SPARC processors were produced by Texas Instruments, Ross Technology, and even the Russian company MCST (Moscow Center of SPARC Technologies). But the main manufacturers of the processors and servers are, of course, Sun Microsystems and Fujitsu.

In the figure, green indicates processors conforming to version 7 of the specification (SPARC V7), orange to SPARC V8, and red to SPARC V9. A yellow fill means a multi-core chip.
So, the UltraSPARC line evolved until 2004. After that, Sun began work on a brand-new UltraSPARC V processor, code-named Rock. A prototype was built and tested, but the work was suddenly stopped.
In parallel, in 2005, a new processor appeared: Niagara. It was intended for application servers and was distinguished by:
- a low clock frequency (application code does not need much raw computing power);
- a large number of cores (16) per die;
- a simplified bus that worked well only when each core accessed its own memory area.
Time has shown that applications do need computing power after all, so the Xeon firmly took over this niche, displacing the T-series.
In fact, all the SPARC servers released until recently (M3000, M5000, M8000, M9000) are Fujitsu machines. Then what, you ask, did Sun bring to the table? Simple: nobody needs a server without an operating system, and Sun held the rights to Solaris. A mutually beneficial cooperation.
Just a couple of years ago analysts predicted the imminent death of SPARC, especially after the Fukushima accident. It seemed that Oracle had buried the line, believing that RAC on x86 would replace large servers. That did not happen, however, and it became necessary to develop what was available, i.e., the T-series.
The success, I must say, is simply amazing. Today Oracle has a core code-named S3, on which all modern SPARC processors are based. The T5 differs from the T4 in clock frequency (3.6 GHz versus 3.0 GHz), the number of cores per chip (16 versus 8), and the number of external links. As a result, the largest T4-based server contains 4 chips (T4-4, 4 × 8 = 32 cores), while the largest T5-based one contains 8 (T5-8, 8 × 16 = 128 cores). The M5 is the same T5 with more external interprocessor links and more cache, but fewer cores; the maximum configuration is the M5-32 (32 × 6 = 192 cores).
Oracle's SPARC processors have many interesting properties: depending on the load, the number of threads executed by one core adjusts automatically (from 1 to 8), and as the number of active cores decreases, the clock frequency rises. A number of hashing and encryption algorithms are implemented in hardware.
Fujitsu, after a long break, also released something new: the M10 servers. Their processor is a development of the SPARC64 line: the clock frequency has been raised, the floating-point unit redesigned... In addition, the server architecture has changed: the machines can now be assembled from building blocks, like the IBM Power 770. As a result, a single server can hold up to 1024 cores (2048 threads).
Today, Oracle is the only one of the Big Three with a distinct roadmap for developing its RISC processors. The official website offers a simple, presentation-ready picture. In lectures and seminars, Oracle talks about a new core that is currently in testing. In addition, a "software in silicon" program has been proclaimed, that is, hardware support for things such as Oracle Database data types and compression algorithms.
Power
In 1985, the Thomas J. Watson Research Center (the one the Watson supercomputer is named after) launched the America project to develop a second-generation RISC processor. In 1986, IBM's Austin office began work on the RS/6000 series; in 1990, the first computers with a POWER-architecture processor, the RS/6000 (RISC System/6000), were released.

In the picture, 32-bit processors are marked in orange, 64-bit in red. Yellow shading indicates a multi-core chip.
Soon afterwards, the AIM alliance (Apple, IBM, Motorola) was formed to develop the PowerPC processor. The processor went through several incarnations, and in 2005 Freescale Semiconductor (what remained of Motorola) founded power.org, an alliance for developing the PowerPC architecture that anyone can join.
There is also a 64-bit PowerPC64, used mainly in game consoles: Sony PlayStation, Nintendo Wii, Microsoft Xbox 360...
As for the main branch, POWER7 is today the most productive of the RISC processors. Each core can run from 1 to 4 threads depending on the mode (SMT1, SMT2, SMT4), which, unlike on SPARC, is selected statically. In addition, the Power 780 supports a "turbo mode": raising the clock frequency of half the cores by switching off the other half.
The largest server is the Power 795: 256 cores, up to 1024 threads.
There will be a POWER8, but when, and with what characteristics, is unknown...
Instead of a conclusion
If this article gathers enough upvotes, the next installment will be devoted to clustered relational DBMSs, of which there are as many as seven on the market. In the next episode:
- How do Big Companies circumvent the CAP theorem?
- Why was Oracle RAC on small nodes unable to replace large servers?
- Where to go if the entire infrastructure is deployed on the Microsoft platform?
- What do the processing centers of the largest banks work on?
- What happens if you eat too many green plums?
- And much more...