Quick setup of the Aerodisk Engine storage system
We continue our introduction to the Russian AERODISK ENGINE N-series storage systems. The previous, introductory article is here. The team has also launched its own YouTube channel with training videos on configuring and working with the system. And just before the new year, Aerodisk started a promotion under which you can buy its storage systems at a discount of up to 60%. An excellent offer, in our opinion.
This time Aerodisk provided us with an ENGINE N2 storage system in an All-Flash configuration for self-guided study and configuration, and we will share that experience.
As part of our acquaintance with ENGINE, we will publish a series of three articles:
- Basic setup
- Crash tests
- Load tests
In this article we perform the basic configuration of the storage system: we present LUNs and file shares to a host and evaluate the usability of the management interface. Beforehand, we took a one-day training course on working with the system and read the documentation.
So, what we have:
- A dual-controller AERODISK ENGINE N2 storage system with FC 8G and 10G Ethernet adapters
- 16 SSD disks
- 8 HDD drives
- A physical server running Windows Server 2012, connected to the storage system through SAN switches (FC and Ethernet)
- Working documentation for the storage system, as well as the sharp minds and skilled hands of our engineers.
A reasonable question: why the HDDs, when All-Flash is the current trend? The fact is that tasks for hybrid storage (SSD + HDD) have arisen and continue to arise, so we asked Aerodisk to add a minimal number of HDDs to the flash configuration so we could check how hybrid groups work. For now we will configure the storage; the big performance test comes in the next article.
This is the box that ended up in our hands. According to the manufacturer, it holds 40 TB and is capable of 300,000 IOPS. Sounds intriguing; we will check.
We unpack and see the following:
In our opinion everything is done conveniently: the chassis carries labels in English and Russian explaining what you should and should not do. The presence of Russian is, of course, a pleasant touch.
At the front we see slots for 24 disks; at the back, modular controllers and power supplies. The controllers are equipped with FC ports, Ethernet ports (regular RJ-45 and 10-gigabit optical), and SAS ports for attaching disk shelves. Having all the popular I/O port types in one box is a definite plus. Everything is duplicated and therefore hot-swappable, so there should be no problems with non-stop operation. But we will check.
The package also includes rails and a technical passport that, among other things, lists the IP addresses for connecting to the storage controllers and the administrator password.
We mount the storage system in the rack, connect it to the server through the switches (both FC and Ethernet), power it on, and start the configuration. We can manage the system from the command line over SSH or through the web interface. We will deal with the command line later and go straight to the web interface:
On the dashboard we see the current total load on the two controllers, the state of the cluster, and the sensors. On the left is the main menu; in the upper right, the logon menu, where we also set the time and change the password. In the upper left is a useful information panel that displays the "health" status of the various storage components. If something is wrong, you can click on the problem right away and the system will take you to the relevant menu. At the bottom is a log showing the latest operations.
In general, everything is convenient and logical. We proceed to configure the storage.
Configuring Storage Groups
According to the documentation, ENGINE can present storage over the following protocols:
- FC and iSCSI (block)
- NFSv4 and SMBv3 (files)
There are also FTP and AFP, but those are, in our opinion, exotic and will not be covered in this article (if you really need them, write to us and we will try them out and report back).
There are two types of disk groups: RDG, which can serve both block and file resources, and DDP, which serves only block resources (and is specially tuned for that). Our previous article on Aerodisk gave a detailed description and usage scenarios for RDG and DDP. Since RDG is more packed with useful functions, we will configure it here. We will return to DDP in the next article, when we need to test various performance scenarios.
Create RDG Storage Group
We build a hybrid group from 4 SSDs (2 for cache, 2 for the tier at RAID-10) and 7 HDDs at RAID-6P (triple parity). As a result we get a fast "upper" tier on SSD and a slow but very reliable "lower" tier on HDD.
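As a sanity check, the usable capacity of such a layout is easy to estimate. A minimal sketch, assuming hypothetical per-disk sizes of 2 TB per SSD and 4 TB per HDD (the real sizes are not stated above):

```shell
#!/bin/sh
# Rough usable-capacity estimate for the hybrid RDG described above.
# Per-disk sizes are assumptions for illustration only.
SSD_TB=2    # assumed capacity of one SSD, TB
HDD_TB=4    # assumed capacity of one HDD, TB

# Upper tier: 2 SSDs in RAID-10 (effectively a mirror) -> 1 disk of capacity.
# The other 2 SSDs act as cache and contribute no usable space.
upper=$((1 * SSD_TB))

# Lower tier: 7 HDDs in RAID-6P (triple parity) -> 7 - 3 = 4 data disks.
lower=$(((7 - 3) * HDD_TB))

echo "upper tier: ${upper} TB"
echo "lower tier: ${lower} TB"
echo "total usable: $((upper + lower)) TB"
```

With these assumed sizes the group offers roughly 18 TB of usable space; triple parity deliberately trades capacity for reliability on the HDD tier.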
Creating the group raised no questions. It happens in two stages: first the main "lower" tier is created, and then the "upper" tiers are layered on top. During creation you can enable deduplication and compression. The system also immediately tells us how many disks will remain for automatic replacement in case of failures. We leave one disk as a hot spare to test this mechanism.
After creation we see the "skeleton" of our RAID group. It looks clear and convenient:
Also, after creating a group, you can add disks to any of the tiers in a special menu:
The group is created. The group's properties contain tabs for LUNs and file shares:
LUN creation starts from there as well. During creation we are offered various options. Among the clearly useful ones: the ability to create a "thin" LUN, to set the block size for a specific LUN (very useful for different load types), and to enable or disable deduplication and compression for each LUN separately. We create a "thin" LUN with deduplication and compression. The LUN is created:
Many different operations are available on the created LUN. We will try them out after presenting the LUN to the server.
Now let's create the file resources. Creating NFS and SMB shares is not much different from creating a LUN: you can likewise choose an individual block size and "thin" or "thick" provisioning, but there is one difference. Deduplication and compression cannot be enabled individually for a file share; the setting is inherited from the parent object. So if we want deduplication and compression to work on file shares, they must be enabled at the RDG level. In principle this is fine, but it is less flexible than with LUNs.
Access control for file resources is a separate topic. For NFS, access (for reading and/or writing) can be restricted by IP address and/or by user.
For SMB, local user creation and integration with Active Directory are provided. To use AD, you can enable AD authorization when creating the file share and join the share to the domain. In that case, permissions on the share are managed through Active Directory.
So, we created two file resources: NFS and SMB.
After creation we look at which operations are available. Essentially everything is the same as with LUNs: resizing, snapshots, access type, and so on. Now the task is to present these resources to the host.
Let's start with the LUNs
A LUN can be presented over iSCSI and/or FC. This is not a typo: judging by the Aerodisk documentation, one LUN can indeed be presented over both FC and iSCSI at the same time. Why this would be needed is not entirely clear, but the vendor says the feature can be useful for diagnostics. Fair enough. In any case, we will do things the old-fashioned way and present one LUN over FC and the other over iSCSI. To avoid recreating anything, we make a clone of the existing LUN.
We will not describe the SAN switch configuration here; it is no different from the configuration for other storage systems. We note that the knowledge base on the Aerodisk support portal contains examples of configuring various SAN switch options, which is certainly a plus to the vendor's karma.
LUN mapping for FC
We open the initiators page and see that the host's WWNs have arrived. We create a target on the storage system and bind the targets and initiators into a device group.
We select the desired LUN and do the mapping through the created device group.
The appendix of the administrator's guide contains a separate how-to on correctly presenting storage resources over each protocol, with settings for popular operating systems. Presenting the LUN over FC raised no particular questions. On CentOS, the device-mapper-multipath package must be installed first. In the end, the host server saw the block device and recognized it as AERODISK.
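For reference, the CentOS multipath preparation mentioned above boils down to a few standard commands. A sketch assuming a stock CentOS 7 host, with no Aerodisk-specific tuning shown (it requires real FC hardware, so this is a command outline rather than something to run as-is):

```shell
# Install and enable native Linux multipathing on CentOS (sketch).
yum install -y device-mapper-multipath
# Generate a default /etc/multipath.conf and start multipathd:
mpathconf --enable --with_multipathd y
# After zoning and mapping, the Aerodisk LUN should show up with its paths:
multipath -ll
```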
By the way, during mapping we found a useful feature: the LUN ID can be set manually. By default this ID is assigned automatically in sequence, but sometimes it needs to be set by hand: for example, for SAN boot (booting the OS from a storage LUN), or in large data centers with many different storage systems and even more LUNs, where the LUN ID is used for proper bookkeeping and quick lookup. In our opinion, this feature is a must-have.
Now we check: the LUN is accessible through both active controllers (the second via the non-optimized path, classic ALUA).
We format the LUN as NTFS and get drive D:.
Moving on to iSCSI
We create another LUN on the same disk group. Presenting it over iSCSI took some work. The point is that for iSCSI, besides the target, the initiator, and their binding, there is one more entity: the HA resource. An HA resource is a virtual interface carrying a virtual IP (VIP); it simultaneously spans two (or more) physical Ethernet interfaces on two different controllers and provides fault tolerance. Schematically it looks like this:

An HA resource is bound to a specific RDG. You can bind another HA resource to the same group and give its VIP a different subnet (this can be useful in practice).
In the end we figured it out. We created an HA resource, installed the iSCSI initiator on Windows, and copied the Windows initiator name (IQN). Then we created an iSCSI target on the storage system and bound the target to the initiator.
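For comparison, on a Linux host the same connection would come down to a couple of iscsiadm calls. The VIP address and target IQN below are placeholders, not values from our setup, and an actual target is required, so this is only a sketch:

```shell
# Discover targets behind the HA resource VIP (address is a placeholder):
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
# Log in to the discovered target (the IQN is a placeholder):
iscsiadm -m node -T iqn.2017-01.ru.aerodisk:target0 -p 192.168.1.100 --login
# The LUN should now appear as a new block device:
lsblk
```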
We connect the LUN in Windows, format it, and create drive D:.
Connecting the file resources
This process is equally simple for both SMB and NFS. The only catch on Windows is that you need to install the standard NFS client. All these nuances are described in the documentation. File access also requires an HA resource; we created one in the previous step, so we reuse it.
Both of our file shares are mapped in Windows as network drives, G: and E: respectively.
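For reference, from a Linux host the same two shares would be mounted roughly like this. The VIP, share names, and credentials are placeholders, not values from our setup, and the mounts need a live storage system, so this is only a sketch:

```shell
# Mount the NFS share exported by the storage (address and path are placeholders):
mount -t nfs 192.168.1.100:/nfs_share /mnt/nfs
# Mount the SMB share; the credentials here are placeholders too:
mount -t cifs //192.168.1.100/smb_share /mnt/smb -o username=user,password=secret
```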
With that, the basic configuration of the storage system is complete; next come the reliability tests. The total time we spent on the basic setup, occasionally peeking into the documentation, came to about 30-35 minutes, 10 of which went to wrestling with iSCSI. In our experience that is very fast (on some storage systems from well-known vendors, similar operations took several hours), so we can say the system is quite easy to learn, logical, and convenient for the administrator.