Fujitsu ETERNUS DX 100 S3 Storage Review and Test



    Today we’ll talk about Fujitsu’s scalable entry-level DX 100 S3 storage system, and the emphasis on scalability and unification is there for a reason. The system can be upgraded to the higher-end models of the line simply by replacing the controller modules, and it can provide both block access to data (via Fibre Channel, FCoE, iSCSI, SAS and InfiniBand) and file access (NFS, CIFS) without any additional devices (filers). There is also an opportunity to save money: if file access is not required, you can purchase a version whose controller module does not include the board responsible for file access.

    This system belongs to the updated ETERNUS S3 line, which received the SAS3 interface (12 Gbit/s SAS) and thereby doubled its back-end throughput. The storage runs VxWorks (developed by Wind River Systems), a multi-threaded 64-bit real-time operating system that provides high performance and security. A similar operating system is used in the NASA space program; in particular, the Curiosity rover landed on the surface of Mars under its control.

    Key features and benefits of ETERNUS DX 100 S3 storage:
    • A reserve of performance and capacity that allows server virtualization, email, databases and business applications to be consolidated in one system
    • Support for rapid data growth at low cost, and investment protection thanks to a high degree of expandability within the system and the option of migrating to more powerful models
    • The extensive ETERNUS SF Express management software suite is included
    • Flexible disaster recovery
    • Built-in smart snapshot features for backup
    • Convenient and cost-effective operation with minimal administration effort
    • Unified SAN and NAS access to the disk drives, contributing to a quick payback
    • Reduced operating costs thanks to unified management functions across all systems of this family
    • A wide range and various combinations of network types are supported, as well as direct attachment to the server
    • Various combinations of 2.5-inch and 3.5-inch drives are supported


    Specifications:
    • Interfaces - Fibre Channel (up to 16 Gbit/s), FCoE (10 Gbit/s), iSCSI (10/1 Gbit/s), SAS (12/6/3 Gbit/s), CIFS, NFS
    • Disk drives (maximum) - 144 x SSD, SAS and Nearline SAS in mixed configurations (2.5-inch / 3.5-inch)
    • Capacity (maximum) - 576 TB
    • Host interfaces - 4
    • Connected host ports - max. 1024
    • RAID levels - 0, 1, 1+0, 5, 5+0, 6


    Test system configuration:
    • Fujitsu ETERNUS DX 100 S3 storage with block access, 8 Gbit FC switching
    • 10 x 900 GB 10K rpm SAS drives in RAID 10
    • Fujitsu PRIMERGY RX200 S8 server, Intel Xeon E5-2660, 8 x 8 GB RAM, 2 x QLogic FC HBA
    • VMware ESXi 5.5 + Windows Server 2012 R2 VM + 100 GB LUN
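    For reference, a quick back-of-the-envelope calculation of the usable capacity of such a pool (ignoring formatting and hot-spare overhead, which the review does not specify):

```python
# Usable capacity of the test pool: 10 x 900 GB SAS drives in RAID 10.
# Mirroring keeps two copies of every block, so usable space is half of the
# raw capacity; formatting and hot-spare overhead are ignored here.
drives = 10
drive_gb = 900
raw_gb = drives * drive_gb      # 9000 GB raw
usable_gb = raw_gb // 2         # 4500 GB usable under RAID 10
print(raw_gb, usable_gb)        # -> 9000 4500
```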




    From the point of view of the physical build of the device, there is nothing to complain about, but there is nothing particularly remarkable to highlight either. The ETERNUS DX 100 S3 is a solid system made of high-quality materials, with a design that has become standard across manufacturers, dictated by its rack-mount form factor.

    The graphical storage management interface is spartan and does not stand out in any way: the usual top tabs with tree-like sub-items and parameter selection tables. The weakest side of this interface is the performance monitoring; it is formally there, but feels more like a token feature. The first annoyance is that monitoring has to be explicitly enabled, otherwise all readings stay at zero, and the switch lives in a completely different menu, so you will not find it right away. Even after it is enabled, you get a non-customizable list of readings, without any graphs or visualization, although the log can be saved for a given period of time. The shortcomings of the GUI, however, are compensated, without exaggeration, by a set of settings outstanding for this storage segment. For instance, mapping can be configured flexibly, both by port and by port groups, LUN groups and hosts. You can also select the mode of operation with the host, i.e. which ALUA (Asymmetric Logical Unit Access) scheme the logical drive will use: Active-Active, Active-Passive or Preferred Path. You can also specify how much cache will be available to a particular logical drive, and much more.
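    To make the difference between those modes clearer, here is a small illustrative sketch; the path states use generic ALUA vocabulary, and the exact mapping to DX 100 S3 firmware behaviour is an assumption, not something taken from the review.

```python
# Illustrative model of the three host-access modes for a LUN owned by
# controller CM0. Path states follow generic ALUA vocabulary; how the
# DX 100 S3 firmware reports them is an assumption for illustration.
ALUA_MODES = {
    # Both controllers accept I/O for the LUN.
    "Active-Active":  {"CM0": "active-optimized", "CM1": "active-optimized"},
    # Only the owning controller accepts I/O; the partner is on standby.
    "Active-Passive": {"CM0": "active-optimized", "CM1": "standby"},
    # Both paths work, but the host is expected to prefer the owner.
    "Preferred Path": {"CM0": "active-optimized", "CM1": "active-non-optimized"},
}

def usable_paths(mode: str) -> list[str]:
    """Controllers a host may send I/O to in the given mode."""
    return [cm for cm, state in ALUA_MODES[mode].items() if state.startswith("active")]

print(usable_paths("Active-Active"))    # ['CM0', 'CM1']
print(usable_paths("Preferred Path"))   # ['CM0', 'CM1'] (CM0 preferred)
print(usable_paths("Active-Passive"))   # ['CM0']
```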







    Based on the manufacturer’s claim that this system is a full-fledged Active-Active device, we can expect data to be processed by both controller modules in parallel, regardless of which controller owns the logical drive, and this was confirmed during testing. Having selected ALUA Active-Active mode on the storage side and applied the Round Robin multipathing policy in the ESXi settings for the LUN presented by the DX 100 S3 to the hypervisor, we got two active channels and load balancing across both controllers. This can be seen in the ESXi monitoring graph and in the even CPU load of both storage controllers while the test file was being written with IOmeter.
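    As a rough illustration of why Round Robin produces the even split described below, here is a tiny sketch of the policy’s behaviour with two active paths; it is a simplified model, not VMware’s actual path-selection plugin implementation.

```python
from collections import Counter
from itertools import cycle

# Simplified model of the Round Robin path selection policy with two active
# paths (vmhba1 -> CM0, vmhba2 -> CM1). VMware's real PSP switches paths
# after a configurable number of I/Os; here we alternate on every request.
paths = cycle(["vmhba1/CM0", "vmhba2/CM1"])
io_per_path = Counter(next(paths) for _ in range(10_000))

print(io_per_path)  # each path ends up carrying ~5000 of the 10000 requests
```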



    The graph shows that the total write throughput approached 500,000 KB/s, split between the adapters (FC HBAs vmhba1 and vmhba2) connected to different controllers at roughly 250,000 KB/s each.
    Below are the average results of various test utilities run in a Windows Server 2012 R2 virtual server deployed in the VMware ESXi 5.5 virtualization environment.
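    A quick sanity check of those numbers; note that the ~800 MB/s figure for the usable bandwidth of an 8 Gbit/s FC link is a rough approximation, not a number from the review.

```python
# Sanity check of the observed figures: convert KB/s to MB/s per HBA and
# compare with the assumed usable bandwidth of one 8 Gbit/s FC link.
total_kb_s = 500_000
per_hba_kb_s = total_kb_s / 2            # two HBAs, one per controller
per_hba_mb_s = per_hba_kb_s / 1024       # ~244 MB/s per link
fc8_usable_mb_s = 800                    # assumed usable 8 Gbit FC bandwidth
print(f"{per_hba_mb_s:.0f} MB/s per HBA, "
      f"{per_hba_mb_s / fc8_usable_mb_s:.0%} of one 8G FC link")
```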

    Anvil Benchmark results








    These test results are not unambiguous: the Anvil and CrystalDiskMark utilities showed very good performance for this segment, while IOmeter, on the contrary, showed nothing outstanding. The reason is that the standard synthetic tests in Anvil and CrystalDiskMark use preset parameters designed more for testing individual drives (SSD, HDD) than storage systems, and mainly reflect the speed of the disk subsystem together with its fast cache (in our case 8 GB, 4 GB per controller): the more cache and disks, the more I/O operations we get. IOmeter allows you to load the system more flexibly, overflow the cache and increase the request queue. To achieve results close to the real workload, you need to know exactly what you want from the storage system: how many IOPS (Input/Output Operations Per Second) are needed (and assemble the corresponding number of disks, otherwise the system will simply have nowhere to get them from), what proportion of reads and writes there will be, roughly how many simultaneous requests will arrive (queue depth), and so on. This is where the flexible configuration of the Fujitsu ETERNUS DX comes in handy. But even used “as is”, with default parameters, it delivers close to the maximum of its capabilities, backed by a modern and reliable software and hardware platform.
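    As an illustration of this kind of sizing, here is a minimal sketch; the per-disk IOPS figures and RAID write penalties are common rules of thumb, not numbers from the review or from Fujitsu documentation.

```python
import math

# Rough back-of-the-envelope IOPS sizing. The per-disk IOPS figures and RAID
# write penalties below are common rules of thumb, not vendor numbers.
PER_DISK_IOPS = {"10k_sas": 140, "15k_sas": 180, "nl_sas": 80}
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}   # back-end writes per host write

def disks_needed(target_iops: int, read_share: float, disk_type: str, raid_level: str) -> int:
    """Minimum number of disks for a target front-end IOPS load."""
    write_share = 1.0 - read_share
    # Reads hit the back end once; writes are multiplied by the RAID penalty.
    backend_iops = target_iops * (read_share + write_share * RAID_WRITE_PENALTY[raid_level])
    return math.ceil(backend_iops / PER_DISK_IOPS[disk_type])

# Example: 2000 IOPS at 70% reads on 10K SAS drives in RAID 10, as in the test setup.
print(disks_needed(2000, 0.7, "10k_sas", "raid10"))   # -> 19 disks (rule-of-thumb estimate)
```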

    To summarize, two main factors shape the impression of the Fujitsu ETERNUS DX 100 S3. The first is the ease of commissioning: no errors, problems or difficulties with setup and connection, everything is clear and everything works. The second is the flexibility of the system: a wealth of settings, the ability to provide both block and file access, and an upgrade path to more powerful storage systems by simply replacing the controllers.

    Based on material by Vyacheslav Tretyakov, an engineer at Paradigma. See the full article here.
