ETegro Fastor FS200 G3 – a fault-tolerant storage server



    Today we want to tell you about another product of ours: the ETegro Fastor FS200 G3 storage system. Its main and, we believe, most attractive feature is that it is a fault-tolerant storage server built on a two-node cluster running Windows Storage Server 2012.



    Before talking about the device itself, let us explain why we created such a solution and why, in our opinion, it is needed: to build data storage with a high degree of fault tolerance, a solution that provides not only reliable storage but also high availability. To achieve this, we developed a product based on a cluster of two nodes working with shared disk space, all combined in a single chassis. In other words, the storage system has two controllers operating in Active-Active mode out of the box, and the time it takes to transfer functionality from one controller to the other in case of failure is extremely short. It is unlikely that any other solution can implement a fault-tolerant file server equally successfully in conditions where even a five-minute downtime for a reboot leads to significant financial losses, while staying within a reasonable budget. Moreover, if the “native” 12 disks do not give you enough space, you can always expand it by adding Fastor JS300 G3 / JS200 G3 disk shelves, bringing the total number of disks up to 132. An interesting feature of this storage system is that it can be both a NAS (SMB 3.0 and NFS 4.1) and a SAN at the same time.

    Before moving on to the device itself, we will answer another question that is likely to come up: why is the ETegro Fastor FS200 G3 better than a storage system assembled on a cluster basis from separate servers and disk shelves? The answer is simple and comes down to two points. Firstly, it is more compact, since it occupies only 2U in a rack. Secondly, it is cheaper, simply because it needs less hardware. And as a bonus, you get noticeably fewer cables in the rack. :)

    Well, enough of the verbiage (no one likes reading large walls of text); let's go take a look at the newcomer in our lineup.



    At first glance, nothing special: a standard 2U server. Not long ago we wrote about a server with four handles; now the time has come for a server with one handle. Just kidding. The green handle in the center of the lid is not meant for carrying the entire server at all; it only lets you conveniently open the central cover to replace the fans, should you ever need to.







    The entire front panel is occupied by a cage for 12 hard drives. As is expected in the polite society of modern servers, they are hot-swappable and have their own status indicators. Yes, these are the same 12 base disks that form the shared disk space for the whole cluster and are connected to both nodes at once through redundant I/O paths (Multipath I/O).

    Particularly attentive readers will even be able to make out the name of the manufacturer of the hard drives used in this fully functional demo server.

    At the edges of the disk cage there are two groups of LEDs, which completely give away the fault-tolerant nature of this server: each group displays the current state of its own cluster node.



    The back panel, in all its glory, shows two separate, absolutely identical sleds pressed against the side panels of the case, each of which houses a server acting as a cluster node. Between them there is just enough room for two power supplies.





    Naturally, all of this is also hot-swappable: removal and installation not only require no tools, but are made easier by special convenient handles.

    The redundant power supplies are exactly the same as in all our other servers, which greatly simplifies both finding a suitable replacement and keeping a stock of spares for those whose IT infrastructure is built primarily on ETegro equipment. In the FS200 G3 we use 1100 W units, which is enough even for a server loaded with disks to the maximum.





    The server modules themselves are boxes packed very densely with metal and silicon. In their capabilities they are comparable to conventional modern 1U servers: judge for yourself, each node is built on the Intel C602 chipset with two Intel Xeon E5-2600 processors and up to 512 GB of RAM.



    Each node provides seats for two 2.5-inch drives, which hold the operating system of that node. Here we use SSDs ourselves and recommend that everyone else do the same, because with ordinary hard drives the responsiveness of the system becomes very uncomfortable.



    As for expansion cards, it all depends on what you need in your particular case. The available PCI-E slots can take SAS RAID or HBA controllers, as well as 10G Ethernet, 40G Ethernet or 56G FDR InfiniBand controllers; it all depends on the existing infrastructure this storage system will be joining.



    Many, however, will find the connectors already present on the rear panel of each node sufficient. In addition to the traditional VGA, RS232, two USB ports and an RJ45 port for server management, you will find here two 10G Ethernet ports (based on the Intel X540) with RJ-45 connectors and a miniSAS port for connecting expansion disk shelves.

    We should mention right away that there is also a third network interface, a gigabit Intel i350AM2, but it is used for the internal interconnect between the cluster nodes.

    For dessert, we have saved a paragraph touching on the religious wars over operating systems. Traditionally, we are quite tolerant of our customers' choice of operating system: our servers work fine both with OSes from the Windows camp and with *NIX-based ones. But in this particular case, the choice of our specialists is unambiguous: this storage server runs Windows Server 2012. And the point is not that this server is certified for Microsoft Windows Server 2012 and appears in the corresponding list. It is much simpler than that: at the current level of OS development, this is the only option that supports clustering directly on shared SAS devices, allowing us to sell a truly out-of-the-box cluster, whereas any other option would require additional fine-tuning of the software on site as far as the use of the midplane is concerned. There is, of course, also the LSI Syncro option, but that deserves separate consideration.

    In its current form, the server provides access to data over the NFS 4.1 and SMB 3.0 protocols. The Fastor FS200 G3 also supports fault-tolerant block-level data access via an iSCSI SAN.
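    To give an idea of what client access to such a share might look like, here is a minimal sketch in Python using the third-party smbprotocol library. It is only an illustration, not part of the product: the cluster name fs200-cluster, the data share and the credentials below are made-up placeholders.

    # A minimal sketch of reading a clustered SMB 3.0 share from a client.
    # Requires the third-party library: pip install smbprotocol
    # "fs200-cluster", the "data" share and the credentials are hypothetical.
    import smbclient

    # Register credentials for the server; the library negotiates the SMB dialect.
    smbclient.register_session("fs200-cluster", username="demo", password="secret")

    # List the contents of the clustered share via its UNC path.
    for name in smbclient.listdir(r"\\fs200-cluster\data"):
        print(name)

    # Read a file from the share.
    with smbclient.open_file(r"\\fs200-cluster\data\readme.txt", mode="r") as f:
        print(f.read())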

    As always, we are happy to hear your thoughts and suggestions, or reasonable criticism (we would rather not, of course, since nobody likes criticism, but reasonable criticism is useful to hear).
