Best practices for hosting Drupal in a cloud environment

Original author: Azure Blog
The main purpose of this article is to describe the features and best practices of hosting Drupal in the cloud environment of the Microsoft Azure platform [you can read about the advantages of the Azure platform for hosting websites in PHP, Java, Python, .NET and Node.js in this article - approx. transl.]. In this guide you will learn:

  • How to migrate an existing Drupal site to Azure websites
  • How to configure Azure Storage Module for media content
  • How to configure Memcached Cloud to support caching
  • Best practices for hosting Drupal CMS in the cloud
  • How to scale a Drupal website to multiple regions around the world

How to Migrate an Existing Drupal Website to the Cloud


Migrating your Drupal site to the Azure cloud is straightforward. Follow the best practices below and you can move your Drupal site to the cloud environment in about an hour.

Creating an Azure Web Site and MySQL Database

First, create a new website with MySQL and Git support in Azure (you can use the step-by-step instructions). Please note that this article describes the use of the FREE version of MySQL in the cloud. This version of the database is great for the development phase, but for production use you may need to purchase a paid database plan from ClearDB in the Azure Store. The plans on offer run on a shared database cluster; see the ClearDB from Azure Store article for details. If a shared database cluster plan does not suit you, you can choose a dedicated MySQL cluster; details can be found under ClearDB mission critical database plans.

If you intend to use Git for your Drupal site, follow the steps in the guide that describes how to configure the Git repository. Make sure you complete the steps in the section on obtaining the connection information for the remote MySQL server, as you will need this information later. You can safely skip the final part of the guide, which focuses on deploying a Drupal site; however, if you are just getting to know Azure (or Git), that part of the guide may still be useful.

After starting a new Azure website with a MySQL database, you will have information on connecting to a remote database and (optionally) the Git repository. The next step is to copy your MySQL database to the Azure website.

Copying the MySQL database to the Azure website

There are many ways to migrate a database to Azure. One proven way to migrate MySQL is the mysqldump tool. The following command copies a database from a local machine to the Azure Web Sites cloud:

mysqldump -u local_username --password=local_password drupal | mysql -h remote_host -u remote_username --password=remote_password remote_db_name

Of course, you need to provide the username and password for your existing Drupal database, as well as the host name, username, password and database name for the MySQL database created in the first step. This information is available in the connection string that you obtained earlier, which has a format similar to the following:

Database=remote_db_name;Data Source=remote_host;User Id=remote_username;Password=remote_password

Depending on the size of your database, the copy process may take several minutes.

Your database now lives in the Azure Web Sites cloud. Before posting your Drupal code, you need to modify it so that it can connect to the new database.

Changing the database connection in settings.php

At this step, you will again need the connection information for the new database. Open the file /sites/default/settings.php in an editor and replace the values of 'database', 'username', 'password' and 'host' in the $databases array with the values for your new database. You should end up with something like this:

$databases = array(
  'default' => array(
    'default' => array(
      'database' => 'remote_db_name',
      'username' => 'remote_username',
      'password' => 'remote_password',
      'host' => 'remote_host',
      'port' => '',
      'driver' => 'mysql',
      'prefix' => '',
    ),
  ),
);

Note: if you have a $base_url configuration parameter in settings.php, comment it out, since Drupal CMS will generate URL addresses based on $base_url. You can re-enable the $base_url parameter once your cloud site is configured to use a custom domain. Save the settings.php file. You are now ready to deploy your code.

Hosting Drupal Code Using Git or FTP

The final step is to deploy your code to the Azure website cloud using Git or FTP.







  • If you use FTP, get the host name and username from the control panel of your Azure website. Then, using any FTP client, upload the Drupal files to the /site/wwwroot folder of the remote site;
  • If you use Git, you should have created the Git repository in the earlier steps. Install Git on your local machine, then follow the instructions shown after you created the repository. See the article on configuring Git.
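The Git route, for example, comes down to committing your Drupal tree and pushing it to the remote repository that Azure created for you. A minimal sketch, using a local bare repository as a stand-in for the Azure remote (the real URL comes from your site's control panel):

```shell
#!/bin/sh
# Sketch of the Git deployment flow. A local bare repository stands in
# for the Azure remote; in practice the remote URL comes from your
# site's control panel (e.g. https://<user>@<site>.scm.azurewebsites.net/<site>.git).
set -e
demo=/tmp/drupal-git-demo
rm -rf "$demo" && mkdir -p "$demo"
git init -q --bare "$demo/azure-remote"   # stand-in for the Azure-hosted repo
git init -q "$demo/site"
cd "$demo/site"
git config user.email drupal@example.com
git config user.name  "Drupal Deployer"
echo '<?php // Drupal front controller' > index.php
git add -A
git commit -qm "Initial Drupal deployment"
git remote add azure "$demo/azure-remote" # a real deployment uses the Azure URL here
git push -q azure HEAD:master
echo deployed
```

In a real deployment, Azure then publishes the pushed tree to your website automatically.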

How to configure Azure Storage Module


Immediately after installing Drupal, you need to enable and configure the blob storage module, which uses the Azure Storage cloud storage to save media files. Use the following instructions to configure this module:

  1. Log in to your site as an administrator and enable the Azure Storage module on the Modules tab. If this module is not in the list of modules, install it from here.
  2. Click the Configuration tab, then select Azure Storage in the Media section. Click the Add button to add the storage account information that your Drupal site will use. Fill in all the details: the storage account name, the name of an existing container (the container must be public), the blob storage URL and the primary access key. If your storage is associated with a CDN, enter the CDN address in the custom URL parameter; if you are not using a CDN, leave this field blank.
  3. Go to the Configuration tab again and select File System in the Media section. Select Azure Storage and save the setting by clicking Save Configuration.
  4. Select Structure -> Content Types, then click manage fields for the Article type. Check that it has an Image field, which provides the image upload option when creating new content. If you plan to support uploading files of other formats, enable the File Upload field as well.
  5. Click edit for the Image field and select Azure Storage as the Upload destination. Do the same for every other upload field (fields of type FILE). Repeat these steps for any content type that supports uploading images or other files.




How to configure Memcached Cloud to support caching


Subscribe to the Memcached Cloud service, which offers memcached as a managed service. It is currently available in the East US and West US regions of the Azure platform. If you want to manage your own memcached servers, you can easily configure them on a Linux-based Azure virtual machine. This article uses the cloud-based Memcached service.

Log in to the Redis Labs portal and select New Memcached Subscription. Select the cloud platform/region for the service and the plan that you will use. A free plan (25 MB) is fine for development and testing, but it will not provide the level of performance required for a production site.

Press Select and create your own memcached endpoint, entering all the required information. The module that I use for Drupal does not support SASL (Simple Authentication and Security Layer), which can be added to your memcached server, so I unchecked this option at this step. Enter a resource name and click Activate. If you have a module that does support SASL for memcached, you can enable it while creating the memcached endpoint.

You can now manage your memcached service from the Redis Labs portal. To get the server endpoint, click on the Resource Name you created and note the Endpoint value, which you will need later to configure Drupal.

Next, download and copy the memcached PECL extension from here. Note that this extension is for PHP 5.4 (32-bit); if your site uses a different configuration, select the appropriate library from here. Install the memcached module for Drupal. In the Azure website control panel, update the site configuration: in the app settings section of the Configure tab, add the PHP_EXTENSIONS parameter with the value "bin\php_memcache.dll". Then, in the settings.php file, in the $conf variable,




add the memcached server. If you do not specify any server, memcache.inc will assume that your memcached instance is running on the local machine on port 11211 (127.0.0.1:11211 or localhost:11211), and since it is not there, your application will not work.

The following array example demonstrates a pattern:

$conf['memcache_servers'] = array(
  'pub-memcache-10939.us-east-1.1.azure.garantiadata.com:10939' => 'default'
);

See the Memcache API for Drupal manual for more details. With the configuration above, a single server handles all Drupal cache operations; if you prefer to split your data set across several memcached servers, you can do so easily by creating new memcached server endpoints in the Azure Store. Below is a simple setup with two memcached servers: in this example all data is cached in Bucket 1, except for the "pages" that are cached in Bucket 2.

$conf['memcache_servers'] = array(
  'server1_hostname:server1_port' => 'default',
  'server2_hostname:server2_port' => 'pages'
);
$conf['memcache_bins'] = array(
  'cache_page' => 'pages',
);

The most common caching approach combines a content delivery network (CDN) with a memcached service. By default, Drupal uses its database to store the cache; when content is modified, Drupal marks the affected cache entries as stale to keep content consistent. Memcached replaces this internal Drupal caching system.
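With the memcached module installed, settings.php also needs to tell Drupal to use memcache as its cache backend. A minimal sketch of the commonly used settings (the module path below is an assumption; adjust it to where the memcache module is installed on your site):

```php
<?php
// Sketch: route Drupal 7's cache layer through the memcache module.
// The module path below is an assumption; adjust it to your install.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database so forms survive memcached restarts.
$conf['cache_class_cache_form'] = 'DatabaseCache';
```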

Best Practices for Drupal CMS in Azure


General recommendations for any site in the cloud:

  1. Plan for the future: monitor your site's performance, resource requirements and user traffic patterns so you can scale the infrastructure up and down on demand;
  2. Back up your site [Azure Web Sites offers a built-in automatic backup and restore function - approx. transl.] and test your backup and recovery procedures.

Now let's focus on Drupal CMS and learn some best practices for building and managing Drupal CMS in the Azure Web Sites cloud environment.

Security configuration

  1. Delete all temporary files before deploying. While editing files you may leave temporary backup copies behind, for example .bak files, files whose names end with "~", or settings.php.orig. Such files can be viewed through the browser unless you deny access to them with URL Rewrite rules in the web.config file, and they can become a vulnerability that opens your site to attackers. Delete all such files, or restrict access to them by adding rules to the configuration file that deny access to those file types. You can also automate the deletion of temporary files using the WebJobs background task feature offered by Azure Web Sites.
  2. Enable SSL for login. Drupal does not require Secure Sockets Layer (SSL) when a user logs in, which makes it easier for attackers to gain administrative access to your site. Install and enable the Secure Pages module from drupal.org to ensure that login pages are served over SSL.
  3. Disable insecure upload fields. Remove unnecessary File and Image fields that let users upload files to the site; an attacker can use them to gain control of your website by uploading a malicious file. Restrict the file types that may be uploaded, and disallow uploads of .exe, .swf, .flv, .htm, .php, .vb and .vbs files.
  4. Turn on the Security Kit. The Security Kit module offers security improvements such as protection against Clickjacking, Cross-Site Request Forgery (CSRF), Cross-Site Scripting (XSS) and other attacks. The default configuration is recommended, but if your site has special requirements that it does not meet, you can tune the module's parameters yourself. For details, refer to the module page.
  5. Do not use common usernames for administrator accounts. Your administrator account should not be named admin, administrator or root; these names are widely known and make brute-force attacks easier. Use unique usernames instead.
  6. Hide error output from your users. Error messages can reveal sensitive information about your site or server to visitors. To stop displaying error information to users, open the Logging and errors page at /admin/config/development/logging, select None under Error messages to display, and click Save configuration.
  7. Enable the Password Policy module. Content editors on your site may choose easy-to-crack passwords, opening the door to attackers. To reduce this risk, enable the Password Policy module, which enforces strong password policies.

Performance configuration

  1. Set the minimum cache lifetime to 5 minutes or more. The minimum cache lifetime prevents the page and block caches from being cleared for the specified period after content changes. If it is set below 5 minutes, the server works harder to deliver fresh content to visitors. To change it, open the Performance page at /admin/config/development/performance and select a new value for the Minimum cache lifetime parameter. Set it as high as you can, balancing the benefit of keeping pages in the cache against visitors' need to see fresh content quickly.
  2. Set the page cache maximum age to 5 minutes or more. When the maximum page cache age is below 5 minutes, the server must regenerate pages more often, which lowers site performance. Open the Performance page at /admin/config/development/performance and select a new value for the Page cache maximum age parameter.
  3. Optimize CSS and JS scripts. With CSS/JS optimization turned off, pages load more slowly for your users and server load increases. On the Performance page at /admin/config/development/performance, enable the Aggregate and compress CSS files option and the Aggregate JavaScript files option.
  4. Turn on page compression. With page compression off, visitors wait longer for pages to download. Compressing pages before storing them in the cache also reduces traffic to the backend. On the Performance page at /admin/config/development/performance, make sure Cache pages for anonymous users is checked, then check Compress cached pages.
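If you prefer to keep these settings in code, the same options can be pinned in settings.php via $conf overrides. A sketch, assuming the standard Drupal 7 variable names (verify them against your Drupal version):

```php
<?php
// Sketch: Drupal 7 performance settings pinned in settings.php.
// Variable names assume Drupal 7; verify against your version.
$conf['cache'] = 1;                     // cache pages for anonymous users
$conf['cache_lifetime'] = 300;          // minimum cache lifetime: 5 minutes
$conf['page_cache_maximum_age'] = 300;  // page cache maximum age: 5 minutes
$conf['page_compression'] = 1;          // compress cached pages
$conf['preprocess_css'] = 1;            // aggregate and compress CSS files
$conf['preprocess_js'] = 1;             // aggregate JavaScript files
```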

Configuring the Azure website

  1. Through the Azure Website Management Portal, enable server logs and use Azure Storage to store them. You can use the same storage that you configured to store media content. See the Azure website diagnostics guide for more information .
  2. Set up automatic scaling, which will automatically increase or decrease the number of instances of your site. See the Auto-scale Configuration manual for details .
  3. Use the Basic or Standard modes of Azure Web Site, which provide you with dedicated VM instances, high performance, support within the framework of SLA. For more information, see Features by tiers .
  4. Configure the site to run on at least two Medium or Large instances (instances correspond to the VMs serving your website). With a single instance, a problem with its VM can make your site inaccessible; with two instances you avoid this single point of failure.
  5. Perform stress testing of your site using Visual Studio or other tools to make sure that the site’s scaling configuration really allows you to serve calculated traffic.
  6. Set up auto-healing for your site, which restarts your VMs based on certain site health indicators. See the How to Auto-heal your website article for more information.

Code Best Practices

  1. Avoid making changes to the Drupal core. Making such changes complicates the management of Drupal versions and updating your website, as well as making maintaining your site more difficult as it grows.
  2. Avoid using a large number of modules. Drupal offers you flexibility by allowing you to add modules that extend the functionality of the CMS. But along with this, too many modules can affect the performance of your site and slow it down.
  3. Use web.config to manage the Azure website. Azure Web Sites uses IIS, which lets you use the web.config file to protect files from unauthorized access and to manage URL Rewrite rules. Use the web.config example provided here for your Drupal site. You should also disable Application Request Routing (ARR) cookies, which pin individual users to specific VM instances; turning this feature off lets your website use the normal load-balancing behavior that Azure provides out of the box. See the Disable ARR Cookie article for details on the corresponding web.config section.
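As a sketch of that configuration, one commonly used way to disable ARR session affinity is a custom response header in web.config (verify the exact placement against the Disable ARR Cookie article):

```xml
<!-- Sketch: disable ARR session affinity so requests are load-balanced
     across instances instead of being pinned by a cookie. -->
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Arr-Disable-Session-Affinity" value="true" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
```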


Drupal scaling to several regions around the world


By hosting copies of the website in different regions around the world, you reduce the risk of a single point of failure compared to relying on a single copy in one data center. The key thing to keep in mind when building cloud solutions is that any component (website, database, cache, etc.) may fail, so your solution must handle these failures gracefully and stay available. This also reduces the operating costs that service outages can cause.

For this scenario, you need to host your Drupal site in at least two regions, such as East US and West US. Consider the Active-Active (Master-Master) configuration for the Drupal website on Azure:

  1. Two Azure websites linked to their respective MySQL databases
  2. Both MySQL databases are synchronized using database replication
  3. Using the Azure Traffic Manager service, user traffic is balanced across regions using one of three methods: Performance, Failover or Round Robin.
  4. Memcached Managed Service configured with failover
  5. Azure Storage uses geo-redundancy with Azure CDN

This architecture is very easy to create, but if your application has special requirements, then you can customize this configuration for yourself. You can make the following changes:

  • create a master-slave website configuration
  • create a configuration with one master and many slaves
  • Use your own Web Jobs-based replication process
  • automate management processes using Web Jobs

Scaling a website

Scaling up Azure Web Sites requires two steps: changing your Web Hosting Plan mode to a higher level and configuring certain settings after moving to a new plan. Both actions were covered in this article. Higher levels, such as Standard mode, offer high performance and flexibility in how your resources are used. See the How to scale Azure website documentation for details .

Scaling the database

Your application depends on two components - the Azure website and the database. Depending on how you created your database, you are offered several ways to scale your database for high availability and fault tolerance. For example, there are two scenarios:

  1. If you use the ClearDB service, you just need to configure ClearDB high availability routing (CDBR). The ClearDB service offers database replication between pairs of regions (for example, East US and West US), but you can also build your own database replication tools using Azure Web Jobs.
  2. You could configure MySQL Cluster CGE , which offers all the necessary tools for managing MySQL clusters in Microsoft Azure virtual machines. Please note that in this case you will have to independently manage all MySQL clusters, database replication and scaling operations.

Memcached Cache Scaling

Redis Labs' managed Memcached Cloud solution offers high-availability and resiliency plans. You can scale the memcached cloud across regions when a single memcached endpoint is not enough for fault tolerance. See the Memcached Cloud Features article for more information.

Configure Traffic Manager to route your traffic

The Azure Traffic Manager service allows you to control how user traffic is distributed to specific endpoints, which can include websites. Traffic Manager applies an intelligent policy engine to DNS queries for your Internet resources. Your cloud services or websites can run in the same data center or in different data centers around the world. Traffic Manager offers three routing methods:

  • Failover : choose this method when you have endpoints in the same or different Azure data centers (regions) and you want to use one primary point for all traffic, but have a backup in case the primary point fails or becomes unavailable. See the Failover load balancing method for more information .
  • Round Robin : choose this method when you want to distribute the load evenly [or with a different weight for each point - approx. transl.] between a set of endpoints in one data center or between different data centers. For more information, see the article Round Robin load balancing method .
  • Performance : choose this method when you have endpoints in different geographical locations and you want visitors to your site to use the “closest” site to them in terms of the lowest latency. For more information, see the article Performance load balancing method .

Create a new instance of the Azure Traffic Manager service through the Azure Management Portal. Go to the control panel of the created service and click ADD to add the websites you want to include in traffic routing. In the dialog, select the Web Site service type and choose the websites to use. Once both endpoints have been added to the traffic manager, click the URL of your traffic manager to verify that your requests are routed to your sites.




Add Website Endpoints



Conclusion


We covered the basic tasks and topics regarding moving a Drupal site to the Azure Web Sites cloud environment. The solutions we discussed above will simplify the transfer of Drupal sites to Azure for projects of any size. Now you can begin to build and scale your Drupal sites on the Microsoft Azure platform.
