Fujitsu Enhances Array Protection and Quality Management

    At the end of July, Fujitsu introduced new functionality for its ETERNUS DX S3 disk arrays. The storage enhancements provide an additional layer of data protection against disk failures and major disasters, and simplify response-time control for applications with different priorities.



    The traditional approach to fault tolerance in a RAID 6 array on ETERNUS DX reserves one of the array's disks as a “hot spare”. When a disk fails, its contents are reconstructed using RAID parity mechanisms and written to this spare disk. Once the rebuild is complete, the failed disk can be replaced with a new one and the recovered data copied from the hot spare to the replacement. Only after all of these recovery operations, writing the data first to the hot spare and then to the new disk, is the RAID array again protected against hardware failures by its redundancy.

    The problem is that as the capacity of modern hard drives grows, so does the time required to rewrite all the data from the failed drive onto the hot spare; with terabyte drives the operation can take many hours. During this rebuild the fault tolerance of the array is reduced: if another disk fails before the recovered data has been written to the hot spare, some or all of the data stored on the array may be irretrievably lost.

    The ETERNUS Fast Recovery feature introduced at the end of July replaces the dedicated hot spare disk with Fast Hot Spare (FHS) capacity distributed across all disks of the RAID group. Because the spare space is spread over the array's disks, the contents of a failed disk are rebuilt with write operations running in parallel to tens or even hundreds of drives, so the procedure completes much faster. For a RAID 6 array with one-terabyte disks, for example, the rebuild is accelerated six-fold, from nine hours to one and a half.

    Thus, Fast Recovery significantly shortens the rebuild procedure for a failed disk, and with it the window during which the failure of another disk could cause data loss.
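
    The scale of this gain follows directly from the write bandwidth available during the rebuild. Below is a rough back-of-envelope sketch in Python; the 31 MB/s effective rebuild rate and the parallelism factor of six are illustrative assumptions chosen to reproduce the nine-hour and one-and-a-half-hour figures above, not published Fujitsu specifications.

# Rough rebuild-time estimate: illustrative only, with assumed throughput
# figures rather than official Fujitsu numbers.

def rebuild_hours(capacity_gb, write_mb_per_s, parallel_targets):
    """Time to rewrite a failed drive's contents, assuming the rebuild is
    limited by the aggregate write bandwidth of the target drives."""
    total_mb = capacity_gb * 1000
    seconds = total_mb / (write_mb_per_s * parallel_targets)
    return seconds / 3600

# Classic hot spare: all recovered data funnels into a single drive.
print(round(rebuild_hours(1000, 31, parallel_targets=1), 1))  # ~9.0 hours

# Fast Recovery: spare capacity is spread over the array, so recovered data
# is written to several drives in parallel (the factor of 6 is illustrative).
print(round(rebuild_hours(1000, 31, parallel_targets=6), 1))  # ~1.5 hours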



    The second new feature of the ETERNUS storage systems, ETERNUS Storage Cluster, ensures continuous availability of the information stored on the array: even in the event of a major disaster that damages not only the array itself but the entire data center, the data remains intact. ETERNUS Storage Cluster builds on the synchronous remote copy mechanism REC (Remote Equivalent Copy), well known to owners of the ETERNUS DX family, and implements a failover to another storage system that is transparent to business applications whenever the primary storage system fails or is shut down for scheduled maintenance.

    The system administrator can configure failover to the backup system to start automatically when a failure occurs, or trigger it manually before servicing the array (for example, when updating the microcode) or ahead of a planned outage. Thanks to real-time mirroring between the primary and backup arrays and transparent switching, business-process downtime caused by temporary unavailability of the storage system is almost completely eliminated. The technology is supported across the entire ETERNUS DX S3 line, including the entry-level DX100 S3 and DX200 S3 models.



    Building a Storage Cluster requires, in addition to the two arrays connected via Fibre Channel, a server that acts as the Storage Cluster Controller. In normal cluster operation, data from the primary array is continuously copied to the backup array using synchronous REC, while the controller constantly monitors the state of the primary array. If the controller detects a failure on the primary array, it starts the failover procedure and transfers the primary array's basic parameters (LUN numbers, WWN addresses, and so on) to the backup array, so the switchover is transparent to the application servers that store their data on the primary array.
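
    The failover sequence can be pictured as a simple monitor-and-switch loop. The sketch below is a minimal illustration of that logic in Python; the Array class, its methods, and the polling interval are hypothetical stand-ins, not Fujitsu's actual Storage Cluster Controller interfaces.

import time

# Minimal sketch of the monitor-and-failover logic described above. The
# Array class, its methods and the polling interval are hypothetical
# illustrations, not Fujitsu's actual management API.

POLL_INTERVAL_S = 5

class Array:
    """Stand-in for a storage array holding the identity seen by servers."""
    def __init__(self, name, luns, wwns):
        self.name, self.luns, self.wwns = name, luns, wwns
        self.healthy = True

    def check_health(self):
        return self.healthy

    def export_identity(self):
        return {"luns": self.luns, "wwns": self.wwns}

    def import_identity(self, identity):
        # The backup takes over the primary's LUN numbers and WWN addresses,
        # so the switch stays transparent to the application servers.
        self.luns, self.wwns = identity["luns"], identity["wwns"]

def run_controller(primary, backup):
    """Poll the primary array; on failure, move its identity to the backup."""
    while primary.check_health():
        time.sleep(POLL_INTERVAL_S)
    backup.import_identity(primary.export_identity())
    print(f"failover complete: {backup.name} now answers for the primary's LUNs and WWNs")

# Simulated run: the controller loop exits as soon as the primary reports a failure.
primary = Array("dx-main", luns=[0, 1, 2], wwns=["wwn-a"])
backup = Array("dx-backup", luns=[], wwns=["wwn-b"])
primary.healthy = False  # simulate a failure on the main array
run_controller(primary, backup)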



    Storage Cluster can be used in two scenarios: a cluster may consist of arrays installed in the same data center, or of arrays located up to 100 kilometers apart to protect against major disasters. Unlike comparable disaster-recovery solutions, Storage Cluster does not need a separate virtualization appliance to mirror data between the arrays, an extra component that would complicate management and maintenance.

    Consolidating large volumes of data on a single system can lead to competition for storage I/O resources between different servers and their business applications. The Auto QoS function (automatic maintenance of quality of service), implemented in the new version of the ETERNUS SF V16.1 management software, makes it possible to guarantee response times for the most important business applications even when the storage system is running at full capacity under heavy load. With Auto QoS, the system administrator selects one of three priority levels for each application (high, medium, or low) and sets a Target Response Time for the data volumes allocated to specific applications.



    ETERNUS SF monitors volume I/O performance and dynamically adjusts resource allocation to keep response times within the Target Response Time. If the response time exceeds the target value, additional I/O subsystem resources are automatically allocated to the volume, or its data is moved to a storage tier with faster access, for example to SSDs. Unlike traditional QoS mechanisms, Auto QoS does not require the system administrator to perform complex I/O performance (IOPS) calculations to ensure the required quality of storage service.
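
    This behaviour can be thought of as a feedback loop per volume. The following Python sketch illustrates the idea; the tier names, bandwidth shares, and adjustment steps are assumptions made for illustration and do not reflect the actual ETERNUS SF algorithm or settings.

# Illustrative sketch of an Auto QoS-style feedback loop: if a volume's measured
# response time exceeds its target, give it a larger share of I/O bandwidth or
# promote it to a faster tier. Tier names, shares and adjustment steps are
# assumptions for illustration, not the actual ETERNUS SF algorithm.

TIERS = ["nearline", "sas", "ssd"]  # storage tiers from slowest to fastest

volumes = {
    # volume name: priority, target response time (ms), current tier, bandwidth share
    "erp_db": {"priority": "high", "target_ms": 10, "tier": "sas", "share": 0.3},
    "file_srv": {"priority": "medium", "target_ms": 30, "tier": "sas", "share": 0.2},
    "archive": {"priority": "low", "target_ms": 80, "tier": "nearline", "share": 0.1},
}

def adjust(volume, measured_ms):
    """One control step: react only if the measured response time misses the target."""
    cfg = volumes[volume]
    if measured_ms <= cfg["target_ms"]:
        return  # target met, nothing to do
    if cfg["share"] < 0.5:
        cfg["share"] += 0.1  # first remedy: allocate more I/O resources
    elif TIERS.index(cfg["tier"]) < len(TIERS) - 1:
        cfg["tier"] = TIERS[TIERS.index(cfg["tier"]) + 1]  # then: move to a faster tier

# The ERP database volume repeatedly misses its 10 ms target.
adjust("erp_db", measured_ms=25)  # bandwidth share grows to about 0.4
adjust("erp_db", measured_ms=22)  # grows to about 0.5
adjust("erp_db", measured_ms=20)  # share is capped, so the volume moves to the SSD tier
print(volumes["erp_db"])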

    The new functionality of ETERNUS DX S3 will be of particular interest to corporate IT directors who require the highest possible level of data availability and effective quality-of-service management for arrays holding critical business data.
