An appeal to cloud hosting providers and their potential customers
As the author of the picture above quite rightly pointed out, many beginners (and not only beginners) face the problem of a high entry threshold of knowledge and understanding. Put too abstractly? Here is what I mean:
Suppose I am an inexperienced young founder with an idea that will turn the world upside down. It is entirely unclear to me where to get all those quantities of GETs, POSTs, INSERTs and, most importantly, CPU.
There are tons of solutions to this problem.
For example, the hoster can publish reference cases such as “a WordPress site with 10k hosts per day produces this load profile”. Convenient? Yes. Simple and cheap for the hoster? Yes. Broadly applicable and accurate enough? Hardly: as one participant in the discussion under a previous post noted, “a single WordPress module can increase the CPU load tenfold.”
You can give out demo access (or, conversely, buy a
And here I finally voiced the thought that had long been knocking at my heart like the ashes of Klaas:
What if you create a virtual server at the hoster, deploy the entire site there, and set a stress tester on it, not from the outside but from the inside? And what if you feed that stress tester not an abstract list of URLs and rules, but a live web server log?
For the site, such a test is indistinguishable from real load: you can even turn off the DoS protection that bans “gluttonous” addresses - after all, the stress server can impersonate the whole Internet for the site.
This stress test does not load the external channels of either the hoster or the client testing their future site. It simulates not a spherical horse in a vacuum but a real herd in a vacuum of “superfast channels”, because in the context of the problem under discussion we primarily need not an “endurance” stress test but a measure of the resources that real visitors consume on a real site.
The test can be run in “accelerated” mode, feeding the whole log in as fast as possible, at 1:1 speed, or at any other speed you choose. Yes, replaying a day’s log at 1:1 takes a day, but does that really get in the way? And the hoster can schedule the run so that most of the requests from the log are processed during the site’s period of minimum load. Moreover, as long as delays do not start affecting the logic of the site itself, such a test can be deprioritized all the way down to minus infinity: fine, the “virtual” visitors will get their page not in 0.2 seconds and not in 2, but in 22. Yet the number of CPU cycles, database queries, IOPS and traffic will still be counted correctly.
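As a rough illustration of this inside-out replay, here is a minimal sketch in Python. The log path, the target URL, the speed factor and the assumption of a Combined Log Format log are all illustrative; a real stress tester would also replay POST bodies, cookies and per-session think times.

```python
#!/usr/bin/env python3
"""Minimal sketch: replay a real access log against the copy of the site
running on the same host. All names and constants are illustrative."""
import re
import time
import requests  # assumes the requests library is installed

LOG_FILE = "access.log"       # hypothetical path to the exported web server log
TARGET = "http://localhost"   # the site copy deployed next to the stresser
SPEED = 10.0                  # 10x accelerated; 1.0 replays in real time

# Combined Log Format: ip - - [timestamp] "METHOD path HTTP/x.x" status size ...
LINE_RE = re.compile(r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)')

def replay(log_path: str) -> None:
    prev_ts = None
    with open(log_path) as log:
        for line in log:
            m = LINE_RE.match(line)
            if not m:
                continue
            ts = time.strptime(m["ts"].split()[0], "%d/%b/%Y:%H:%M:%S")
            now = time.mktime(ts)
            if prev_ts is not None:
                # Preserve inter-request spacing, compressed by the speed factor.
                time.sleep(max(0.0, (now - prev_ts) / SPEED))
            prev_ts = now
            if m["method"] == "GET":
                # Only GETs are replayed blindly; POSTs need recorded bodies.
                requests.get(TARGET + m["path"], timeout=30)

if __name__ == "__main__":
    replay(LOG_FILE)
```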
The input log can and should be adjusted to model slashdot moments, which is quite simple to do: just mix in copies of real sessions with modified IP addresses.
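A minimal sketch of such log “thickening”, under the same Combined Log Format assumption; the multiplication factor and file names are made up for illustration.

```python
"""Sketch: model a slashdot moment by cloning real visitors under fresh
synthetic IPs, so the shape of the traffic stays realistic."""
import random

FACTOR = 5  # each original request gains FACTOR synthetic clones

def fake_ip() -> str:
    # 203.0.113.0/24 (TEST-NET-3) is reserved for documentation and examples.
    return f"203.0.113.{random.randint(1, 254)}"

with open("access.log") as src, open("access_slashdot.log", "w") as dst:
    for line in src:
        dst.write(line)
        orig_ip, rest = line.split(" ", 1)
        for _ in range(FACTOR):
            # Same request, same timestamp, different "visitor".
            dst.write(f"{fake_ip()} {rest}")
```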
All steps are completely transparent and understandable even to a complete beginner.
This method has no shortage of disadvantages.
It creates a load on the hoster comparable to real hosting. Minus the outbound channels, plus load spreading across the day, plus deprioritization - it will still burn CPU, IOPS and the like that cost real money.
It takes real development effort: you need to build a virtual server setup that is isolated from the Internet, properly sandboxed, and priority-managed. Not rocket surgery, perhaps, but development costs money.
It raises a lot of personal data questions: uploading real logs to the hoster, even after passing them through some kind of obfuscator (which itself still has to be developed), is a delicate matter.
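For illustration, a sketch of what such an obfuscator might do before the log ever leaves the client’s machine: replace IPs with stable pseudonyms (so sessions remain linkable) and strip query strings that may carry tokens or e-mail addresses. The salt and the exact fields to scrub are assumptions.

```python
"""Sketch of a client-side log obfuscator: pseudonymize IPs, drop query strings."""
import hashlib
import re

SALT = b"per-customer-secret"   # hypothetical; must never leave the client

def pseudonymize_ip(ip: str) -> str:
    # Stable pseudonym: the same visitor keeps the same fake address,
    # so session structure survives, but the real IP is unrecoverable.
    digest = hashlib.sha256(SALT + ip.encode()).digest()
    return "10." + ".".join(str(b) for b in digest[:3])

def scrub(line: str) -> str:
    ip, rest = line.split(" ", 1)
    rest = re.sub(r'\?[^ "]*', "?<scrubbed>", rest)  # strip query strings
    return f"{pseudonymize_ip(ip)} {rest}"

with open("access.log") as src, open("access_anon.log", "w") as dst:
    for line in src:
        dst.write(scrub(line))
```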
It imposes many conditions on the client: they must be not a fresh startup but a migrant with an existing site and a live log, and not every shared host will even give you access to such a log. Creating a “non-live” log from scratch is hard: few can predict which parts of the site will be visited most often. The notorious “one plugin in WordPress” may sit on a page that 1 visitor out of 1000 opens - or on a page where 999 out of 999 sit (and never go to the other sections at all).
And of course, not every site can be moved to the cloud by simple copying - so even just to estimate the cost of hosting, the webmaster client has to spend time and/or money on a coder’s work to adapt it.
There is, however, a palliative option.
The hoster publishes a virtual machine image (more precisely, a bundle of two machines, a host and a stresser) that the client runs on a laptop, on a NAS, on whatever else - that is already the webmaster client’s problem.
Pros compared to the above: no user data leaks out, and no hoster resources are consumed.
The minuses are also fairly obvious. First, if the webmaster’s home machine can cope with such a load on the site (even at a “very slow pace”), they most likely do not need cloud hosting. Second, the provided virtual machine image leaks the billing system and the hosting platform itself; however, a stripped-down version suitable only for resource accounting could be shipped instead. Third, the effect of virtualization on unknown hardware has to be accounted for very carefully.
An additional bonus is the ability to develop directly against the specific hoster’s platform.
And finally: both options, it seems to me, would also be useful for non-cloud hosters.