How do we build magical SSD hosting in the Netherlands and the USA with new pricing and operating principles? Is it really magical?

    This article does not claim to be the absolute truth, nor does it cover every problem of providing and using hosting services; it poses questions, describes problems, and outlines some methods of solving them. It should help webmasters better understand the specifics of the services they use and make choosing the right solution easier, and it may also be useful to hosting providers.

    Full perfection is unattainable, but the pursuit of excellence matters: we constantly improve the solutions we offer, listen carefully to criticism, and do everything to uphold the basic principle of our work - we are happy to make you happier. Sometimes we make mistakes, and sometimes we make our partners unhappy, but we always try to find a way out of such situations together, because for us the client is first of all a partner, and we are a partner for the client. Listening to each other is extremely important in this relationship.

    While developing a completely new hosting service, one that did not yet exist on the hosting market, we tried to solve a number of issues that have recently concerned us, and possibly other hosting providers as well, and to take our customers' wishes into account. We tried to make the service more reliable and understandable, primarily for the average user of hosting services. It is too early to talk about the success or failure of the project, but the first results suggest that we are not working in vain. Time will tell.

    Hosting Services Issues

    The first step in solving any problem is a correctly formulated problem and correctly posed questions about it. From personal experience, I can say that a correctly posed question is not just most of the solution; the question may well become a full-fledged answer if we break the problem down into its components.

    So, what concerns hosting users? Website availability (reliability, fault tolerance), speed, and price. Most pay special attention to the price, putting it first for both objective and subjective reasons. And therefore the biggest problem, for beginners and advanced webmasters alike, is choosing a tariff plan.

    Choice of a tariff plan

    This choice can significantly affect the relationship between the provider and the webmaster throughout their collaboration. And the point is not that some hosting providers treat those who pay more better (we refrain from judging such cases; the article is not about that), but how easy it is to make a mistake out of inexperience and save too much, thereby provoking a future conflict - and a conflict there will undoubtedly be. Novice webmasters often sincerely resent providers for letters about increased load and write angry reviews on the Internet. Probably everyone has gone through this, including us. They believe that a site with practically no traffic cannot create such a problem. And they are right! They are simply too inexperienced to know that a content management system (CMS), weighed down by many modules and not optimally tuned on top of that, can create significant resource consumption. Or they do not realize that the provider, in order to reach more potential customers, may have created a very inexpensive tariff plan designed only for a small home page.

    At the same time, choosing among the more expensive tariffs is no less complicated. Hosting infrastructures vary greatly from provider to provider, as do the methods of accounting for consumed resources. As a result, an indicator of resource consumption such as processor time turns out to be practically meaningless when switching from one service provider to another, since the processors on the hosting nodes are most likely different. And even if an experienced user can imagine the difference in performance, the degree to which the server software is optimally tuned is much harder to assess, since it depends entirely on the experience of the provider's technical staff. And there is also one more unwritten parameter - the occupancy of the hosting node.

    What do we have in the end? Unpleasant situations even with customers on more expensive tariff plans, when a user, moving from one provider to another, is surprised to find that what worked there creates excessive consumption here. Once again, “letters of happiness” go out. Everyone is dissatisfied, providers and customers alike.


    For several years we have been looking for a way out. We ensured a low hosting price not at the expense of low resource limits, and we protected ourselves from spammers and dishonest customers by introducing a tariff system that was unprofitable for them: the service is only worth ordering for the long term, which makes no sense for dark purposes. We significantly increased the stability of the service without complex technical solutions - a thousand customers generated no more than one request to the technical department per day - and minimized possible conflict situations. The decisions and results were described in detail in our 2012 article, “Stable Hosting - Myth or Reality?”.

    However, all this still did not provide a transparent approach to charging for consumed services, and it did not solve a number of other equally important issues. Even though the variety of tariff plans had been minimized, like the menu of a good restaurant, the user still did not understand which tariff plan could withstand the number of visitors he needed, how the consumed resources were accounted for, or when he might be “asked” to move to a more expensive plan or even “driven” to a VPS or a dedicated server.

    Introducing explicit tariffication of CPU / RAM / IOPS / bandwidth consumption, as on cloud services, would not, alas, be the answer. Ordinary webmasters do not and should not care about these parameters; they only care about visits to their sites and that everything works like magic. So why not start charging for resources solely in the unit by which webmasters measure their income - visitors?

    Problem statement: CPU / RAM / IOPS resources are practically unlimited; only traffic is taken into account

    The result was the task of implementing a fundamentally new hosting service, which did not exist on the market before, where only traffic is accounted for, since there is a clear and understandable coefficient between traffic and the other parameters. For example, take 100 GB of traffic: is that a lot or a little? To visualize it in visitors, we take an average web page size of 700 KB; the number of views is the traffic divided by the average page size. For 100 GB of traffic we get 100 * 1024 * 1024 / 700 = 149,796.57 views. Thus, if the average page size of your website is smaller, for example 200 KB instead of 700, you get many more views - 100 * 1024 * 1024 / 200 = 524,288 - and vice versa. Of course, these values should be treated as purely indicative.
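    The arithmetic above can be sketched as a tiny helper (a minimal illustration; the function name and default are ours, not part of any provider's API):

```python
def estimated_views(traffic_gb: float, avg_page_kb: float = 700) -> float:
    """Convert a traffic quota (GB) into an estimated number of page views.

    Uses the article's convention of 1 GB = 1024 * 1024 KB; the 700 KB
    default is the average web page size assumed in the text.
    """
    traffic_kb = traffic_gb * 1024 * 1024
    return traffic_kb / avg_page_kb


# 100 GB at the 700 KB average: about 149,796.57 views
print(round(estimated_views(100), 2))
# The same 100 GB with lighter 200 KB pages: 524,288 views
print(estimated_views(100, avg_page_kb=200))
```

    The function simply inverts the relationship "traffic = views * page size", which is why lighter pages buy more views from the same quota.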

    And what about the load? For 99.5% of Internet projects there is also a more or less stable relationship between server resource consumption and traffic, so the need to account for load separately disappears. It is enough to include the cost of resources in the cost of traffic, and disagreements with webmasters over the load they create are completely eliminated: it really is not accounted for separately. Yes, some scripts are better optimized than others, but on average the result is quite predictable; it can be factored into the hosting bill and, most importantly, the webmaster's costs can be predicted with a high degree of accuracy, allowing the optimal solution for the price to be chosen.
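    Under this billing model, predicting the bill reduces to predicting traffic. A sketch of that prediction, assuming a purely hypothetical `price_per_gb` tariff value (not an actual price of any provider):

```python
def monthly_cost(visitors: int, avg_page_kb: float, price_per_gb: float) -> float:
    """Estimate a monthly bill when only traffic is billed.

    visitors * avg_page_kb gives the expected traffic in KB;
    dividing by 1024 * 1024 converts it to GB (the article's convention).
    """
    traffic_gb = visitors * avg_page_kb / (1024 * 1024)
    return traffic_gb * price_per_gb


# 524,288 visitors at 200 KB per page is 100 GB; at an assumed 1.0 per GB
# the bill is simply 100.0
print(monthly_cost(524288, avg_page_kb=200, price_per_gb=1.0))
```

    Since CPU, RAM, and IOPS are folded into the traffic price, no other inputs are needed for the estimate.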

    Requirements and problems

    The absence of tariffication and explicit limits on CPU / RAM / IOPS places special demands on the equipment and architecture of the solution. Our task is to ensure the fastest and most uninterrupted operation of all hosted websites. This means the solution should be built on nodes with the highest possible performance and, at the same time, be distributed in order to increase reliability and provide the ability to scale.

    Since modern multiprocessor solutions have tremendous performance and are able to satisfy the needs of thousands of hosting users hosted within a node, special requirements are also placed on the performance of the arrays used for file storage and databases. Arrays of SATA / SAS disks are simply unsuitable, since they cannot effectively cope with the requests of thousands of subscribers: one disk can provide no more than 70-210 read / write operations per second (IOPS), which may clearly be insufficient even in an array of 12 disks.
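    The back-of-the-envelope numbers behind that claim: even taking the optimistic 210 IOPS per spinning disk, a 12-disk array tops out far below a single SSD. The figures come from the text; real throughput also depends on the RAID level and workload, so this is only an order-of-magnitude sketch:

```python
HDD_IOPS_MAX = 210       # optimistic per-disk figure from the text
SSD_IOPS_MIN = 50_000    # lower bound for one SSD, also from the text

# Best case for a 12-disk array (ignoring RAID overhead): 2,520 IOPS
hdd_array_iops = 12 * HDD_IOPS_MAX
print(hdd_array_iops)

# One SSD still outperforms the whole array roughly 20-fold
print(SSD_IOPS_MIN / hdd_array_iops)
```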

    The only correct option in this case is to build the solution exclusively on solid-state drives (SSDs), which provide 50,000 IOPS or more - almost 1,000 times the performance of conventional HDDs. A few years ago, using such drives significantly increased the budget of a solution, pushing hosting providers toward “crutches” such as hybrid RAID arrays or caching CDN servers, where SSDs are used only for cache or only for databases. And this was called SSD hosting - a practice some do not disdain even now, misleading customers in order to save as much money as possible. Yes, SSDs are still significantly more expensive than SATA drives, but the advantages they offer, both in performance and in reliability, are undeniable.

    Moreover, as amarao recently wrote in his article “SSD + raid0 - it's not that simple”, these disks may be ineffective at increasing write performance in an array because of their varying latencies: unlike with HDDs, raid0 will wait for confirmation from the slowest disk in the array. Accordingly, it is better to use the disks independently and achieve higher performance at the software level rather than via RAID.

    In addition, even wear of these disks is important. Everyone knows that SSDs are “killed” by write cycles, so creating separate database servers makes no sense: the disks would wear unevenly. Moreover, dedicated database servers reduce reliability, since in case of problems a significant portion of subscribers feels them immediately.


    To increase fault tolerance and ensure the lowest possible cost for subscribers, we decided to move away from assigning roles to individual nodes. The solution was built on four-processor platforms with ten-core Intel Xeon E7-4850 processors, supporting up to 1 TB of RAM and up to 16 SSD drives. At the same time, to avoid the “mega-server” effect, where a problem with one subscriber (increased load, an attack) causes problems for all subscribers of the node, we used virtualization to split each node into several virtual machines, each of which can use the maximum available resources, but not to the detriment of the other virtual machines. This increased fault tolerance: in the event of a serious load or attack problem with one of the users, only a fraction of the node's subscribers (from 1/16 to 1/32 on our nodes) can feel it. Among other things, the software allows us to immediately block such a problematic client and transfer it to a separate virtual environment to resolve the problem, and when the attack targets an IP address, to move all of its neighbors instead.

    For this purpose, we connected each node to the Internet with a guaranteed bandwidth of 10 Gb/s, with the possibility of expansion, which not only allows us to provide almost any amount of traffic our subscribers need, but also to quickly migrate individual subscribers and entire virtual machines, and to quickly create backups in remote storage and restore from them. The clear relationship between generated traffic and consumed computing resources, described above, made it possible to bill only traffic (visitors), making tariffication transparent and convenient, and the choice of a tariff plan as simple and straightforward as possible.


    Since the launch of the new hosting project in January 2015, we have not received a single dissatisfied client; uptime is 100%, and we hope it will remain close to 100% in the future. Of course, too little time has passed to evaluate all the advantages and disadvantages of the solution, but so far we do not see any significant disadvantages. Perhaps you will?

    For all Habrahabr readers, we offer a unique opportunity to order the Magic Hosting service with a 60% discount using the promotional code (valid until the end of June): HABRHM2015 - we await your criticism and feedback.

    What do we offer?

    - the power of at least four ten-core Intel Xeon E7-4850 processors is available to you;
    - we believe you should be able to consume as much traffic as you need, which is why each hosting server has an Internet connection with a bandwidth of at least 10 Gb/s, expandable to 40 Gb/s;
    - while most hosting servers still use “slow” SATA hard drives, providing no more than 50-140 read / write operations per second (IOPS), we build our solutions exclusively on SSDs that provide 50,000 IOPS and more! Up to 1,000 times faster than traditional SATA hosting! Let your sites fly!

    And in addition, long-term cooperation brings magical discounts, making the price affordable even for a business-card site!

    What are the limitations?

    - on your chosen tariff plan, only the maximum number of visitors per month (traffic) is limited; you can purchase as much traffic as you need and increase your tariff plan to whatever limit you require;
    - since consumed traffic is inextricably linked to consumed CPU / RAM / IOPS resources, we apply practically no limits on them: thanks to the efficient hardware, consumption happens almost instantaneously, which allows the hosting servers' resources to be used fully and more efficiently;
    - it is forbidden to host projects that proxy traffic, convert media files, or perform other similarly heavy computations on the hosting server (standard sites do not fall under these restrictions; this means computing processes that take minutes of processor time, such as converting large video files);
    - it is forbidden to host political sites, sites subject to DDoS attacks, or resources blocked for users from Russia by Roskomnadzor or with a high potential risk of such blocking;
    - the network usage standards adopted by the OFISP working group, as well as the Offer Agreement, must be fully observed.
