Spreading S3 buckets across different pools in Ceph Luminous

While setting up a new cluster on Ceph Luminous, I had the task of distributing different S3 buckets across different storage devices (in my case, SSD and HDD). There are many instructions on the Internet for doing this in Ceph Jewel, but in Luminous the process has changed significantly and the old instructions no longer work. The official documentation does not cover this scenario either, and the configuration process is not entirely trivial.

Task


To restate the task: each node in the cluster has a number of HDDs and SSDs installed. When creating an S3 bucket, it must be possible to specify which devices (HDD or SSD) will store it.

Distributing pools across different devices


Let's look at the current CRUSH replication rules. By default, there should be only a single “replicated_rule” entry:

ceph osd crush rule ls
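
On a fresh cluster the output is typically just that one line:

replicated_rule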

Thanks to a new feature in Luminous, Ceph can detect the device class of each OSD by itself, so we can easily split the devices into different replication rules:

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
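
To double-check what Luminous detected, you can list the known device classes and make sure both new rules are in place:

ceph osd crush class ls
ceph osd crush rule ls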

Delete the old default rule:

ceph osd crush rule rm replicated_rule
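
Note that Ceph refuses to remove a rule that is still referenced by some pool. If the command fails, first switch those pools to one of the new rules; the rule id used by each pool is visible in the pool listing:

ceph osd pool ls detail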

Now create an additional pool in which we will store S3 objects, and place it on the SSDs:

ceph osd pool create default.rgw.buckets.data.ssd 8 8 replicated replicated_ssd
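
A small Luminous-specific detail: the cluster raises a health warning for pools that are not tagged with an application. Since we created this pool by hand, it is worth tagging it for RGW right away:

ceph osd pool application enable default.rgw.buckets.data.ssd rgw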

And the default data pool will be located on the HDD:

ceph osd pool set default.rgw.buckets.data crush_rule replicated_hdd
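
Changing the rule of a pool that already contains data starts a rebalance while the objects migrate to the HDD OSDs; you can watch the progress with:

ceph -s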

Naturally, you can also do the opposite and put the default pool on the SSDs.

Configure the RADOS Gateway


This is the most interesting part, and the reason this article was written.

On a fresh installation, the cluster comes without a default realm; it is not entirely clear why this is so. Create a realm named “default” and set it as the default:

radosgw-admin realm create --rgw-realm=default --default
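
To make sure the realm was created, list the existing realms:

radosgw-admin realm list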

Add an additional placement target for SSD buckets to the “default” zonegroup:

radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id="ssd-placement"

And add the matching placement target to the “default” zone:

radosgw-admin zone placement add --rgw-zone=default --placement-id="ssd-placement" --data-pool="default.rgw.buckets.data.ssd" --index-pool="default.rgw.buckets.index"
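
To check that both sides picked up the new placement, dump the zonegroup and zone configuration and look for “ssd-placement” under “placement_targets” and “placement_pools” respectively:

radosgw-admin zonegroup get --rgw-zonegroup=default
radosgw-admin zone get --rgw-zone=default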

Here a single pool, “default.rgw.buckets.index”, stores the index of all objects (both HDD and SSD), but you can create a separate index pool instead.

Bind the “default” zonegroup to the “default” realm and commit the changes:

radosgw-admin zonegroup modify --rgw-zonegroup=default --rgw-realm=default
radosgw-admin period update --commit
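
The commit creates a new period epoch and propagates the configuration; the current state can be inspected with:

radosgw-admin period get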

The final step is to restart the RADOS Gateway so that it picks up the new configuration.
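
On a systemd-based deployment this usually looks something like the line below; the instance name after “rgw.” depends on how your gateway was deployed, so adjust it to your setup:

sudo systemctl restart ceph-radosgw@rgw.$(hostname -s)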

Now we can create a new bucket on the SSDs. Pay attention to the leading colon: the location string is interpreted as “<zonegroup>:<placement-id>”, so even with the zonegroup part omitted the colon is required (it did not work without it):
s3cmd mb s3://test --bucket-location=:ssd-placement


Or create a bucket with the default placement (in our case, on the HDDs):
s3cmd mb s3://test
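
To confirm where a bucket ended up, check its stats:

radosgw-admin bucket stats --bucket=test

The output should include a “placement_rule” field showing “ssd-placement” for an SSD bucket and the default placement otherwise.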


I hope this little note saves someone time when solving a similar problem.
