Why there is no simple answer to "buy servers or optimize code"

In response to this article about choosing between buying servers and optimizing code.

I would like to touch on the role of human psychology in decision making: how it is underestimated, and how strongly it influences the final result.
For those who do not want to read the whole piece, there is a summary at the bottom of the article.

How it usually goes, and why

Indeed, there is a widespread opinion that buying hardware is easier and more reliable than optimizing code.
Another question is whether there have been any reliable studies on the topic. I think not, and that only confirms the thesis of the article "Programming as a new type of human activity."

Still, let's try a thought experiment with a few assumptions. The market is a place of natural selection, and companies are what gets selected. Whoever served the customer better, earned more money, and managed to stay on the market and outperform competitors, survived. Feedback drives the evolution of companies.
And judging by the fact that companies invest millions of dollars both in people and in servers, both approaches exist, and probably combinations of the two as well, as was noted in the comments to the original article. (Mixed strategies are common elsewhere in life too; think of hawks and doves in game theory.)

From which we can conclude that both approaches exist, plus a third one: some mix of the two in proportion. The success of each is judged by the company's survival in current market conditions, which certainly cannot be reduced to a simple formula of "whatever is cheaper."

It is enough to recall, for example, von Clausewitz's phrase "strategic miscalculations cannot be compensated for by tactical successes," as well as examples of strategic company decisions that did not lead to instant cost reductions (Apple: iPad, iPhone; Google: PR, etc.) but ensured convincing success on the market.

The closed-loop problem

Many things in business and project management nevertheless boil down to simple models. Which, of course, does not negate the limitations of such models.

The idea is simple: you have two or more entities, neither of which can develop substantially without the other. A kind of deadlock.
For example, you are a startup. You need to hire programmers, but there are no clients yet. And there are no clients because there is no product, which is what the programmers would create with their labor. Until the idea of breaking this loop through investors became widespread, there was no such abundance of startups.

A more complex example is choosing a project architecture in an unfamiliar market. To design the architecture, you need real business requirements. But business requirements appear, like appetite with eating, only in the course of operation. The solution is "hack it together and ship to production," known under more harmonious names from various experts: Agile, Continuous Integration, BDD, and so on.

The value of understanding a situation through such a model is that you can transfer the experience of its solution from one sphere to another.
The choice between code optimization and server procurement sometimes comes down to exactly this problem.

The solution pattern is to take one of the sides as the base and express the other through it with some kind of formula.

Usually you can't just take a project and add a second server; some preparation for scaling is still required.
And at the same time, servers are bought based on some calculation.

The solution is a phased approach with a bit of research: you scale step by step, simultaneously measuring how the required parameters (the response time of certain pages, for example) depend on the number and purpose of servers.
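This "scale and measure" loop can be sketched as a small decision helper. All numbers and the diminishing-returns threshold below are hypothetical examples, not recommendations; plug in your own load-test results.

```python
# A minimal sketch of the phased scaling approach described above.
# Hypothetical illustration: the latency figures and the 10% threshold
# are invented for the example.

def speedup_per_step(latencies_ms):
    """Given p95 latency measured after each scaling step
    (index 0 = one server, index 1 = two servers, ...),
    return the fraction of latency each extra server shaved off."""
    gains = []
    for prev, cur in zip(latencies_ms, latencies_ms[1:]):
        gains.append((prev - cur) / prev)
    return gains

def worth_another_server(latencies_ms, min_gain=0.10):
    """Stop buying hardware once an extra server improves the
    target metric by less than min_gain (10% by default)."""
    gains = speedup_per_step(latencies_ms)
    return not gains or gains[-1] >= min_gain

# Example: each added server helps less and less.
measured = [800, 450, 330, 310]  # p95 latency in ms at 1..4 servers
print(speedup_per_step(measured))   # diminishing gains per step
print(worth_another_server(measured))
```

The point of the sketch is the feedback loop itself: every purchase is preceded by a measurement, so the decision "more hardware vs. more optimization" is made on data rather than habit.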

The simplification effect

There is a point I touched on in passing above. The fact is that we essentially simulate the situation in our heads, and there is always some error, a gap between reality and the mental model. You need to remember this, and not reduce the problem either to a single unambiguous decision, a la a silver bullet, or to a one-time application of measures.

Oversimplifying the situation is usually very expensive. For example, when the lead developer's opinion that the architecture is shit and the technology needs to change is ignored, and the decision maker is sure that, in the old fashion, he can just buy another database server and a web server (because it worked before), this leads to project downtime and an inability to respond to market challenges, which often ends fatally.

The same goes for the work of administrators (and for hiring expensive, genuinely good admins), without whom poorly configured servers can do you a disservice. For example, a badly configured Postgres database can kill all the advantages it would otherwise give when scaling.
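To make the Postgres point concrete, here is a toy calculator expressing two widely cited rules of thumb for memory settings. The 25% and 75% fractions are common community starting points, not official values, and the function name is invented for this sketch; any real tuning must be validated against the actual workload.

```python
# Hypothetical illustration of why admin work matters: common starting
# points for two PostgreSQL memory settings. The fractions are rules of
# thumb, not official recommendations.

def pg_starting_points(ram_gb):
    """Suggest starting values (in GB) for two memory settings."""
    return {
        "shared_buffers": round(ram_gb * 0.25, 1),        # ~25% of RAM
        "effective_cache_size": round(ram_gb * 0.75, 1),  # ~75% of RAM
    }

print(pg_starting_points(64))
# A server left at the tiny defaults can be dramatically slower than
# the same hardware with these two lines adjusted in postgresql.conf.
```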

In general, the original article says almost nothing about admins. Sometimes system tuning by an administrator can speed up overall performance, replacing disks on their advice can drastically increase the project's speed, and timely advice to increase channel bandwidth can reduce the risk posed by rapid traffic growth.

And this falls under neither code optimization nor the purchase of new servers, yet it can close the lion's share of the problems.
By the way, did you know that a good admin is usually also a fine programmer, a specialist in process automation, and a bit of a fairy godmother?

The Dunning–Kruger effect

As already noted, the psychology of the participants in the process matters.

More specifically, cognitive biases.

One can note, for example, that people gravitate toward already proven solutions, and that the average person dislikes taking risks.

But the huge harm that I personally rank among the first-order problems is the overestimation of one's own capabilities and abilities.
There is even a named effect, the one in the heading, which says that the smarter and more experienced a person is, the more he doubts his abilities; and the more of an amateur he is, the more self-confident he is and the more inclined to wrong conclusions and actions in the given field.

For me, a typical example is the phrase about refactoring. As is well known, much in nature follows a normal distribution, including, according to some sources, people's talents. According to Brooks, capable and ordinary programmers differ in productivity tenfold.
One can assume there is a similarly large scatter in the quality of work. True, due to the lack of objective quality metrics, such studies are few.

Nevertheless, the following situation is quite typical. The choice falls on total code optimization. A manager confident in his own abilities does not set the programmers a clear task; he considers it enough to say "do some refactoring so it flies."
The programmers seize the chance to "rewrite everything in OOP on the latest version of the framework/standard" (something simple turned into a beautiful but monstrous and heavy Symfony, or old C code rewritten in C++ to demonstrate knowledge of the entire STL), so that "everything is like in the books / like the gurus do it" (pattern-sticking).
As a result, code gets written that runs ten times slower and eats memory and CPU time, but is, admittedly, much easier to maintain.
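The "same result, much slower" outcome is easy to demonstrate on a toy scale. The class names below are invented for the illustration; actual timings will vary by machine, so the example asserts only that the answers match, not the speed ratio.

```python
# A toy illustration of the "prettier but slower" outcome: the same sum
# computed directly and through a gratuitous object hierarchy.
import timeit

def plain_sum(n):
    """The straightforward version."""
    total = 0
    for i in range(n):
        total += i
    return total

class Accumulator:
    """An abstraction nobody asked for."""
    def __init__(self):
        self.total = 0
    def add(self, value):
        self.total += value

class SumStrategy:
    """A 'pattern' wrapping the same loop in two extra layers."""
    def run(self, n):
        acc = Accumulator()
        for i in range(n):
            acc.add(i)
        return acc.total

n = 100_000
assert plain_sum(n) == SumStrategy().run(n)  # identical answers...
print("plain:", timeit.timeit(lambda: plain_sum(n), number=20))
print("fancy:", timeit.timeit(lambda: SumStrategy().run(n), number=20))
# ...but very different costs per call.
```

Neither version is "wrong"; the point is that the trade-off (maintainability vs. speed) must be a conscious decision, not a side effect of an unclear task.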

After the deadlines are blown, management brings in more competent PMs and an architect, who either make urgent fixes and buy hardware, or rewrite from scratch (as time permits).

To the business, the situation looks clear: the programmers and PMs cheated and improved nothing; we had to bring in a guru, who quickly changed something and bought servers. Conclusion: next time, just squeeze the guys harder and buy servers.
At the same time, the self-confident businessman does not listen to the guru's words about "the need for a clear statement of requirements, hiring expensive specialists, involving an architect to work out the technology stack, allocating dedicated optimization stages for scaling," dismissing words he does not understand as yet another ploy by people to divert resources, and as missed opportunities.

Therefore, the habit of subjecting your confidence in the "facts" to doubt before making important decisions is a good way to gradually outgrow your misconceptions.


The initial problem does not boil down to the simple factor of "whatever is cheaper." As both DeMarco and Ashmanov wrote, much of it lies in the plane of human relations and psychology, in the personalities of the participants in the process.

To which, in my opinion, insufficient attention is paid.

To summarize

The following does not apply to cases when, for example, you have a single site that started to slow down at 1000 visitors per day because it runs CMS ***, ****** or *******, and the company consists of one person (i.e., you).

1. Before making a decision, conduct research with the participation of high-class technical specialists.
2. From that research, understand the current situation and obtain from the decision makers a further vector for the development of the project and the business.
3. After this, an understanding should emerge of:
- what resources (monetary, human, time) can currently be allocated to overcome the problem;
- what the deadlines are for resolving the problem and implementing the solution;
- what the risks are and how to work with them if the current problem cannot be overcome.
4. Separately, state clear requirements for the performers: programmers and admins. Sometimes admins can save your project without buying servers or optimizing the code; just pay them well. The same goes for programmers.
5. After that, solve the problem iteratively, each time choosing the optimal ratio of resources among optimizing the application code, server tuning by administrators, and purchasing hardware.
6. After the required indicators are achieved, create (or improve) a business process aimed at strategic quality work against the given metrics (expected traffic growth, growth of disk space used, and so on). The process must, when necessary, allow even a complete change of the technology stack.
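Step 6 can be made concrete by turning a metric like "growth of disk space used" into a forward-looking number. The figures below are invented examples; the function name and the 90% threshold are assumptions for the sketch.

```python
# A sketch of capacity planning against a metric: given current disk
# usage and an assumed monthly growth, how many months until usage
# crosses a threshold? All numbers here are hypothetical.

def months_until_full(used_gb, capacity_gb, monthly_growth_gb, threshold=0.9):
    """Months until usage reaches threshold * capacity (capped at 120)."""
    months = 0
    limit = capacity_gb * threshold
    while used_gb < limit and months < 120:
        used_gb += monthly_growth_gb
        months += 1
    return months

# Example: 400 GB used of 1 TB, growing 50 GB/month -> hits 90% in 10 months.
print(months_until_full(400, 1000, 50))
```

A process that recomputes such numbers regularly gives the business an early signal to plan optimization, tuning, or purchases, instead of reacting after the project is already down.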
