
The story of building the "village supercomputer" from spare parts from eBay, Aliexpress and a computer store. Part 2
Good day, dear Habr readers!
Link to the first part of the story for those who missed it.
I want to continue my story about building the "village supercomputer," and I will explain why it is called that - the reason is simple: I live in a village myself. The name is also a bit of trolling aimed at those who shout on the Internet "There is no life beyond the MKAD!" and "The Russian village has drunk itself into the ground and is dying!" Maybe that is true somewhere, but I will be the exception to the rule: I don't drink, I don't smoke, and I do things that not every "city slicker (c)" has the brains or the money for. But back to our sheep - or rather, to the server, which by the end of the first part of the article was already "showing signs of life."
The board lay on the table; I went through the BIOS to set it up to my taste, threw Ubuntu 16.04 Desktop onto it for simplicity, and decided to connect a video card to the "super machine." The only card at hand was a GTS 250 with a hefty fan sticking out of it, which I installed in the PCI-E 16x slot near the power button.


"I shot on a pack of Belomor (s)" so please do not kick for the quality of the photo. I’d better comment on what is captured on them.
Firstly - it turned out that when installed in the slot - even a short video card rests on the memory card slots, which in this case cannot be installed and you even have to lower the latches. Secondly, the iron mounting plate of the video card closes the power button, so it had to be removed. By the way, the power button itself is illuminated by a two-color LED that lights green when everything is in order and blinks orange if there are any problems, short circuit and the power supply protection has tripped or + 12VSB power is too high or too low.
In fact, this motherboard is not designed to take video cards plugged "directly" into its PCI-E 16x slots; they are all meant to be connected through risers. For the expansion slots near the power button there are angled risers: a low one for short cards that fit below the first CPU heatsink, and a tall angled one with an additional +12V connector for mounting a video card "above" the standard low-profile 1U cooler. The tall riser can take large video cards such as the GTX 780, GTX 980, GTX 1080, specialized Nvidia Tesla K10/K20/K40 GPGPU cards, Intel Xeon Phi 5110p "compute cards" and the like.
In the GPGPU riser, a card plugged into the EdgeSlot connects directly - only, again, you have to bring in additional power through the same connector as on the tall angled riser. For those interested: on eBay this flexible riser is sold as "Dell PowerEdge C8220X PCI-E GPGPU DJC89" and costs about 2.5-3 thousand rubles. Angled risers with additional power are much rarer, and I had to arrange to buy them from a specialized server-parts store via Shopotam. They came out to 7 thousand apiece.
I will say right away that "risky guys (tm)" can even connect a pair of GTX 980s to the board with Chinese flexible 16x risers, as one person did on "that very forum" - by the way, the Chinese make quite decent PCI-E 16x 2.0 contraptions in the style of Thermaltake's flexible risers - but if one day you burn out the power circuits on the server board, you will have only yourself to blame. I did not risk the expensive hardware: I used the original risers with additional power plus one Chinese flexible one, reckoning that connecting a single card "directly" would not burn the board.
Then the long-awaited connectors for the additional power arrived, and I made a pigtail for my riser in the EdgeSlot. The same connector, but with a different pinout, supplies additional power to the motherboard; it sits right next to the EdgeSlot connector, and the pinout is interesting: the riser gets 2 wires of +12V and 2 ground, while the board gets 3 wires of +12V and 1 ground.

Here is that same GTS 250 plugged into the GPGPU riser. By the way, I take the additional power for the risers and the motherboard from the second +12V CPU power connector of my PSU - I decided that was the more correct way to do it.
The tale is told quickly, but parcels travel to Russia from China and other corners of the globe slowly, so there were long pauses in the assembly of the "supercomputer." But finally a server Nvidia Tesla K20M with a passive heatsink arrived - completely new, from storage, sealed in its box and its own packaging, with the warranty papers. And then the suffering began: how do I cool it?
First I bought a custom cooler with two small "turbines" from England - here it is in the photos, with a homemade cardboard diffuser.



And they turned out to be complete crap. They made a lot of noise, the mounts did not fit at all, they blew weakly and produced such vibration that I was afraid the components would shake right off the Tesla board! Which is why they went into the trash almost immediately.
By the way, in the photo next to the Tesla you can see the copper 1U LGA 2011 server heatsinks with blowers from Coolerserver, bought on Aliexpress, installed on the processors. Very decent coolers, although noisy. A perfect fit.
While I was waiting for the new cooler for the Tesla - this time ordering a large BFB1012EN blower from Australia along with a 3D-printed mount - it was the storage subsystem's turn. The server board has a mini-SAS connector that carries 4 SATA ports, plus 2 more SATA ports on regular connectors. They are all SATA 2.0, but that suits me.
The RAID integrated into the Intel C602 chipset is not bad, and most importantly it passes the TRIM command through to SSDs, something many inexpensive external RAID controllers cannot do.
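Whether TRIM actually reaches the drives can be checked right from the running system. A minimal sketch (my own illustration, not part of the build itself; it just reads the standard sysfs attributes, so no device names need to be hard-coded):

#!/usr/bin/env python3
# List discard (TRIM) support for every block device the kernel sees.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    maxb = dev / "queue" / "discard_max_bytes"
    gran = dev / "queue" / "discard_granularity"
    if maxb.exists() and gran.exists():
        supported = int(maxb.read_text()) > 0
        print(f"{dev.name}: discard {'supported' if supported else 'NOT supported'}, "
              f"granularity {gran.read_text().strip()} bytes")

On a filesystem sitting on such a device, "fstrim -v /" then actually issues the TRIM.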
On eBay I bought a meter-long mini-SAS to 4x SATA cable, and on Avito a 5.25" hot-swap cage for 4 x 2.5" SAS/SATA drives. When the cable and the cage arrived, four 1 TB Seagates went into it, a RAID5 across the four drives was assembled in the BIOS, and I started installing Ubuntu Server... and ran into the fact that the disk partitioning program would not let me create a swap partition on the RAID.
I solved the problem head-on: at DNS I bought an ASUS HYPER M.2 x4 MINI PCI-E-to-M.2 adapter and an M.2 SSD Samsung 960 EVO 250 GB, deciding that swap should live on the fastest device available, since the system would run under heavy computational load and the RAM would still clearly be smaller than the data. Besides, 256 GB of memory would have cost more than this SSD.

Here is that adapter, with the SSD installed, in the low angled riser.
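Putting swap on the M.2 drive is simple enough to do by hand after installation if the installer gets in the way. A rough sketch (the device name /dev/nvme0n1 is an assumption - check with lsblk first, since mkswap wipes the target; run as root):

#!/usr/bin/env python3
# Turn the NVMe drive into swap and make the change permanent via /etc/fstab.
import subprocess

DEV = "/dev/nvme0n1"  # assumed device name of the Samsung 960 EVO

subprocess.run(["mkswap", DEV], check=True)                        # format as swap
subprocess.run(["swapon", "--priority", "100", DEV], check=True)   # enable it now
# A UUID-based fstab entry survives device renaming better than a raw path.
uuid = subprocess.run(["blkid", "-s", "UUID", "-o", "value", DEV],
                      capture_output=True, text=True, check=True).stdout.strip()
with open("/etc/fstab", "a") as f:
    f.write(f"UUID={uuid} none swap sw,pri=100 0 0\n")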
Anticipating the question - "Why not put the whole system on M.2 and get access speeds far beyond any SATA RAID?" - I will answer. First, M.2 SSDs of 1 TB and up are too expensive for me. Second, even after updating the BIOS to the latest version 2.8.1, the server still does not support booting from M.2 NVMe devices. I did try an experiment where /boot lived on a 64 GB USB flash drive and everything else on the M.2 SSD, but I did not like it, although in principle such a setup is perfectly workable. If large-capacity M.2 NVMe drives get cheaper, I will probably return to this option, but for now a SATA RAID as the storage system suits me fine.
Once I had settled on the disk subsystem - ending up with 2 x 240 GB Kingston SSD in RAID1 for "/", 4 x 1 TB Seagate HDD in RAID5 for "/home", and the M.2 SSD Samsung 960 EVO 250 GB for swap - it was time to continue my GPU experiments. I already had the Tesla, the Australian cooler had arrived with its "evil" blower that draws a whole 2.94 A at 12 V, the second slot was taken by the M.2 adapter, and for the third I borrowed a GT 610 "for experiments."

In this photo all three devices are connected, the M.2 SSD through the flexible Thermaltake riser for video cards, which works without errors on the PCI-E 3.0 bus. It is built from many separate "ribbons," similar to the ones SATA cables are made of. PCI-E 16x risers made from a monolithic flat ribbon cable, like the old IDE/SCSI ones, belong in the furnace - they plague you with errors from mutual interference. And as I already said, the Chinese now also make risers like Thermaltake's, only shorter.
With the Tesla K20 + GT 610 combination I tried out a lot of things, and along the way found out that when an external video card is connected and the output is switched to it, vKVM stops working in the BIOS - which did not really upset me. I was not planning to use external video output on this system anyway: the Tesla has no video outputs, and remote administration over SSH without any X works fine once you remember what a command line without a GUI is. Still, IPMI + vKVM greatly simplifies management, reinstallation and other work on a remote server.
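Over SSH, the quickest sanity check that both cards are alive and the driver sees them is nvidia-smi from the NVIDIA driver package. A small sketch of such a check (my own illustration; the query fields are standard nvidia-smi ones):

#!/usr/bin/env python3
# Print index, model, memory and temperature for every NVIDIA GPU in the box.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.total,temperature.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True).stdout

for line in out.strip().splitlines():
    print(line)  # one line per GPU; the values depend on the machine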
In general, the IPMI on this board is gorgeous: a separate 100 Mbps port, the ability to switch its traffic onto one of the 10 Gbps ports, a built-in web server for power management and server control, from which you can also download the Java vKVM client and the client for remotely mounting disks or images for reinstalling straight from them... The only catch is that the clients require the old Oracle Java, which is no longer supported on Linux, so for remote administration I had to dig out a laptop with Win XP SP3 and that ancient "toad." The client is slow - good enough for the admin interface and such, but you won't be playing games remotely, the FPS is tiny. And the ASPEED video integrated with the IPMI is weak, VGA only.
While getting to grips with the server I learned a great deal about professional Dell server hardware, which I do not regret at all, just as I do not regret the usefully spent time and money. The informative story will continue later with the actual assembly of the frame holding all the server components.
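By the way, a good deal of the routine work does not need the Java client at all: the BMC speaks standard IPMI over the network, so ipmitool from any Linux machine can query and power-cycle the server. A rough sketch (the address and credentials here are placeholders, not my real ones):

#!/usr/bin/env python3
# Wrap ipmitool calls to the BMC: status, sensors, and (commented out) a power cycle.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.168.0.120", "-U", "root", "-P", "secret"]

def bmc(*args):
    return subprocess.run(BMC + list(args),
                          capture_output=True, text=True, check=True).stdout

print(bmc("chassis", "status"))   # power state, last restart cause, etc.
print(bmc("sensor", "list"))      # temperatures, fan speeds, voltages
# bmc("chassis", "power", "cycle")  # uncomment to hard power-cycle the server
# "ipmitool ... sol activate" gives a serial-over-LAN text console without any Java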
Link to part 3: habr.com/en/post/454480