SSD caching implementation in QSAN XCubeSAN storage

    Performance technologies based on SSDs have long been established in storage systems. The most obvious is using SSDs as the storage space itself, which is 100% effective but expensive. Hence the popularity of tiering and caching, where SSDs hold only the most frequently accessed ("hot") data. Tiering suits long-term usage patterns (days to weeks), while caching, on the contrary, suits short-term ones (minutes to hours). Both options are implemented in the QSAN XCubeSAN storage systems. In this article we will look at the implementation of the second mechanism: SSD caching.




    The essence of SSD caching is using SSDs as an intermediate cache between the hard drives and the controller's RAM. SSD performance is, of course, lower than that of the controller's own cache, but the capacity is much higher, so we get a compromise between speed and volume.


    Indications for using the SSD cache for reads:


    • Read operations predominate over writes (most typical of databases and web applications);
    • The hard drive array is the performance bottleneck;
    • The volume of requested data is smaller than the SSD cache.

    The indications for using the SSD cache for reads + writes are the same, except for the nature of the operations, which is mixed (for example, a file server).


    Most storage vendors use read-only SSD caches in their products. QSAN's principal difference is the ability to use the cache for writes as well. Activating the SSD caching functionality in QSAN storage requires a separate license (supplied in electronic form).


    The SSD cache in XCubeSAN is physically implemented as separate SSD cache pools, of which the system can have up to four. Each pool, naturally, uses its own set of SSDs, and in the properties of a virtual disk you specify whether it uses a cache pool and which one. Caching can be enabled and disabled for volumes online, without stopping I/O. SSDs can likewise be added to a pool and removed from it on the fly. When creating an SSD cache pool, you choose the mode it will operate in: read only or read + write; its physical organization depends on this choice. Since there can be several cache pools, they can serve different purposes (that is, the system can simultaneously have read-only and read + write cache pools).
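    To make the pool-to-volume relationship concrete, here is a minimal Python sketch of this configuration model. It is purely illustrative: the class and field names are our assumptions, not QSAN's actual API.

        from dataclasses import dataclass, field
        from enum import Enum
        from typing import Optional

        class CacheMode(Enum):
            READ_ONLY = "read-only"      # NRAID+, 1-8 SSDs
            READ_WRITE = "read+write"    # NRAID 1+, mirrored SSD pairs

        @dataclass
        class SsdCachePool:
            name: str
            mode: CacheMode              # fixed at creation; defines physical layout
            ssds: list = field(default_factory=list)

        @dataclass
        class VirtualDisk:
            name: str
            cache_pool: Optional[SsdCachePool] = None   # assignable online

        # Up to four pools may coexist, and they may run in different modes:
        pools = [
            SsdCachePool("db-cache", CacheMode.READ_WRITE, ["SSD1", "SSD2"]),
            SsdCachePool("web-cache", CacheMode.READ_ONLY, ["SSD3"]),
        ]
        vol = VirtualDisk("vol01", cache_pool=pools[0])  # enable caching for a volume
        vol.cache_pool = None                            # disable without stopping I/O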


    When a pool is used as a read-only cache, it can consist of 1-8 SSDs. The drives do not have to be the same size or from the same vendor, since they are combined into an NRAID+ structure. All SSDs in the pool are used together: the system tries to parallelize incoming requests across all of them to achieve maximum performance. If one of the SSDs fails, nothing terrible happens, since the cache holds only a copy of data already stored on the hard drive array; the available cache capacity simply decreases (or drops to zero if the cache consisted of a single drive).
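    How requests might be spread across the pool can be shown with a toy placement function. The hash-based scheme below is an assumption made for illustration; QSAN does not document the internals of NRAID+.

        import zlib

        class ReadCachePool:
            # Toy model of a read-only pool: cache blocks are spread across
            # all member SSDs so that independent requests run in parallel.
            def __init__(self, ssds):
                self.ssds = list(ssds)   # 1-8 drives; sizes may differ

            def ssd_for_block(self, block_id: int) -> str:
                # Deterministic placement: hash the block id over the drives.
                key = zlib.crc32(block_id.to_bytes(8, "little"))
                return self.ssds[key % len(self.ssds)]

            def drop_failed(self, ssd: str):
                # A failed SSD only shrinks the cache: every cached block
                # is just a copy of data on the hard drive array.
                self.ssds.remove(ssd)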



    If the cache is used for read + write operations, the number of SSDs in the pool must be a multiple of two, since the contents are mirrored across pairs of drives (an NRAID 1+ structure is used). Duplication is necessary because the cache may contain data that has not yet been written to the hard drives; without it, the failure of an SSD in the cache pool would mean data loss. With NRAID 1+, an SSD failure simply switches the cache into read-only mode and dumps the unwritten data onto the hard drive array. After the faulty SSD is replaced, the cache returns to its original mode of operation. Incidentally, for extra safety, a dedicated hot spare can be assigned to a read + write cache.
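    The mirroring and the degrade-to-read-only behavior can be sketched as follows. This again is a toy model: the pairing and flush details are our reading of the description above, not QSAN's implementation.

        class RwCachePool:
            # Toy model of a read+write pool (NRAID 1+): every cached write
            # lands on a mirrored pair, since it may not yet be on the HDDs.
            def __init__(self, ssd_pairs):
                self.pairs = list(ssd_pairs)  # list of (dict, dict) drive models
                self.read_only = False        # SSD count must be even

            def write_block(self, block_id: int, data: bytes):
                if self.read_only:
                    raise IOError("cache degraded: write through to the HDDs")
                primary, mirror = self.pairs[block_id % len(self.pairs)]
                primary[block_id] = data      # both copies must land before
                mirror[block_id] = data       # the block counts as cached

            def on_ssd_failure(self, flush_dirty):
                # One failed SSD: fall back to read-only mode and destage
                # the not-yet-written blocks to the hard drive array.
                self.read_only = True
                flush_dirty()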



    When using the SSD caching feature in XCubeSAN, keep the controller memory requirements in mind: the more memory the system has, the larger the cache pool it can support.



    In contrast, again, to most storage vendors, which offer nothing beyond an enable / disable switch for the SSD cache, QSAN provides more options. In particular, you can select a cache mode that matches the nature of the load. There are three preset templates tuned for the corresponding services: a database, a file system, a web service. In addition, the administrator can create a custom profile by setting the required parameters (a sketch of the resulting promotion logic follows the list):


    • Cache Block Size: 1/2/4 MB
    • Populate-on-Read Threshold (the number of reads of a block before it is copied to the cache): 1-4
    • Populate-on-Write Threshold (the number of writes to a block before it is copied to the cache): 0-4
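
    Here is a minimal Python sketch of how such a profile could drive promotion decisions. Per-block access counters are our assumption about how the thresholds are evaluated, and the class names are invented for illustration.

        from collections import defaultdict

        class CacheProfile:
            def __init__(self, block_size_mb=1, por=2, pow_=1):
                self.block_size = block_size_mb * 1024 * 1024  # 1/2/4 MB
                self.por = por      # Populate-on-Read Threshold, 1..4
                self.pow_ = pow_    # Populate-on-Write Threshold, 0..4

        class HeatTracker:
            # Counts accesses per cache block; a block is promoted to the
            # SSD cache once it crosses the profile's threshold.
            def __init__(self, profile: CacheProfile):
                self.profile = profile
                self.reads = defaultdict(int)
                self.writes = defaultdict(int)

            def block_of(self, offset: int) -> int:
                return offset // self.profile.block_size

            def hot_on_read(self, offset: int) -> bool:
                b = self.block_of(offset)
                self.reads[b] += 1
                return self.reads[b] >= self.profile.por

            def hot_on_write(self, offset: int) -> bool:
                if self.profile.pow_ == 0:  # 0 disables populate-on-write
                    return False
                b = self.block_of(offset)
                self.writes[b] += 1
                return self.writes[b] >= self.profile.pow_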


    Profiles can be changed "on the fly", although, of course, this zeroes the cache contents and requires a new "warm-up".


    Looking at how the SSD cache operates, the following basic scenarios can be distinguished:







    Reading data that is not in the cache


    1. The request from the host arrives at the controller;
    2. Since the requested data is not in the SSD cache, it is read from the hard drives;
    3. The read data is sent to the host; at the same time, the system checks whether these blocks are "hot";
    4. If so, they are copied to the SSD cache for future use (both read flows are sketched in code after the next list).





    Reading data that is in the cache


    1. The request from the host arrives at the controller;
    2. Since the requested data is in the SSD cache, it is read from there;
    3. The read data is sent to the host.
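
    Both read flows condense into one function. The sketch below reuses the hypothetical HeatTracker from the profile example and models the cache and the hard drive array as plain dicts keyed by block number.

        def read(cache: dict, hdd: dict, heat: HeatTracker, offset: int) -> bytes:
            block = heat.block_of(offset)
            if block in cache:            # hit: serve straight from SSD
                return cache[block]
            data = hdd[block]             # miss: read from the hard drives
            if heat.hot_on_read(offset):  # Populate-on-Read Threshold check
                cache[block] = data       # copy the hot block to SSD
            return data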





    Writing data when the cache is used for reads


    1. A write request from the host arrives at the controller;
    2. The data is written to the hard drives;
    3. A successful write response is returned to the host;
    4. At the same time, the system checks whether the block is "hot" (its write counter is compared against the Populate-on-Write Threshold). If so, it is copied to the SSD cache for later use.
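
    In code form (same toy model; refreshing an already-cached block on write is our addition, needed to keep the cached copy consistent):

        def write_through(cache: dict, hdd: dict, heat: HeatTracker,
                          offset: int, data: bytes) -> None:
            block = heat.block_of(offset)
            hdd[block] = data                # step 2: write to the hard drives
            # step 3: the host is acknowledged here
            if heat.hot_on_write(offset):    # step 4: Populate-on-Write Threshold
                cache[block] = data
            elif block in cache:             # assumption: update a stale copy
                cache[block] = data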





    Writing data when the cache is used for reads + writes


    1. A write request from the host arrives at the controller;
    2. The data is written to the SSD cache;
    3. A successful write response is returned to the host;
    4. The data is written from the SSD cache to the hard drives in the background.
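
    A write-back sketch with a background destager. The queue-based flush is an assumed mechanism: the source only states that the hard drive write happens in the background.

        import queue

        def write_back(cache: dict, dirty: queue.Queue, heat: HeatTracker,
                       offset: int, data: bytes) -> None:
            block = heat.block_of(offset)
            cache[block] = data           # step 2: (mirrored) SSD write
            dirty.put((block, data))      # mark the block for destaging
            # step 3: the host is acknowledged here, before the HDD write

        def destager(hdd: dict, dirty: queue.Queue) -> None:
            # Step 4: background worker flushes dirty blocks to the HDDs.
            while True:
                block, data = dirty.get()
                hdd[block] = data
                dirty.task_done()

        # Typical wiring: run the destager concurrently, e.g.
        # threading.Thread(target=destager, args=(hdd, dirty), daemon=True).start()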



    Putting it to the test


    Test bench

    Two servers (CPU: 2 x Xeon E5-2620v3 2.4GHz / RAM: 32GB) are each connected via two 16G Fibre Channel ports directly to an XCubeSAN XS5224D storage system (16GB RAM per controller).


    For the data array, 16 x Seagate Constellation ES (ST500NM0001, 500GB, SAS 6Gb/s) drives combined into RAID5 (15+1) were used, with 8 x HGST Ultrastar SSD800MH.B (HUSMH8010BSS200, 100GB, SAS 12Gb/s) as the cache.


    Two volumes were created: one for each server.



    Test 1. Read-only SSD cache with 1-8 SSDs


    SSD Cache


    • I/O Type: Customization
    • Cache Block Size: 4MB
    • Populate-on-read Threshold: 1
    • Populate-on-write Threshold: 0

    I/O pattern


    • Tool: IOmeter V1.1.0
    • Workers: 1
    • Outstanding (Queue Depth): 128
    • Access Specifications: 4KB, 100% Read, 100% Random



    In theory, the more SSDs in the cache pool, the higher the performance. In practice, this was confirmed. However, significantly increasing the number of SSDs while the number of volumes is small does not produce an explosive effect.


    Test 2. SSD cache in read + write mode with 2-8 SSDs


    SSD Cache


    • I/O Type: Customization
    • Cache Block Size: 4MB
    • Populate-on-read Threshold: 1
    • Populate-on-write Threshold: 1

    I/O pattern


    • Tool: IOmeter V1.1.0
    • Workers: 1
    • Outstanding (Queue Depth): 128
    • Access Specifications: 4KB, 100% Write, 100% Random



    The result is the same: explosive growth in performance, scaling with the number of SSDs.


    In both tests, the volume of working data was smaller than the total cache size. Therefore, over time all blocks were copied to the cache, and the work was effectively done against the SSDs, with almost no load on the hard drives. The purpose of these tests was to clearly demonstrate the effectiveness of cache warm-up and how performance scales with the number of SSDs.


    Now let us return from heaven to earth and examine a more realistic situation, where the volume of data exceeds the cache size. For the test to complete in a sane amount of time (cache warm-up time grows significantly with volume size), we limited ourselves to a 120GB volume.


    Test 3. Database emulation


    SSD Cache


    • I/O Type: Database
    • Cache Block Size: 1MB
    • Populate-on-read Threshold: 2
    • Populate-on-write Threshold: 1

    I/O pattern


    • Tool: IOmeter V1.1.0
    • Workers: 1
    • Outstanding (Queue Depth): 128
    • Access Specifications: 8KB, 67% Read, 100% Random


    Verdict


    The obvious conclusion suggests itself: an SSD cache is an effective way to improve the performance of almost any storage system. This fully applies to QSAN XCubeSAN: SSD caching is excellently implemented here, including support for both read and read + write modes, flexible settings for any use case, and the resulting performance of the system as a whole. For a very reasonable cost (the license price is comparable to the cost of 1-2 SSDs), you can significantly increase overall performance.
