Why serverless technology is a revolution in product management

Original author: James Beswick
Serverless architectures fundamentally change the factors that limit product development.

Product managers play a variety of roles in an organization. Sometimes they are called the "voice of the customer", sometimes they act as corporate cat herders. They are thick-skinned people who relentlessly push you toward shipping the product, whatever the excuses. A good product manager rarely becomes anyone's idol, but it is thanks to the work of such people that most of the technology products you have ever used came into being.

A PM is always looking for the best tools for the job. We know that competitors are constantly breathing down our necks and that customers are tired of waiting, so we always have to act smarter, faster and more efficiently. When serverless technologies appeared, it was not immediately clear how they would fit into the product management wish list. However, after working with these technologies for a year, I can see that they solve problems of software development that had seemed permanent.

Paradox: team size and performance


The first rule of product management is that the amount of work to be done is constantly growing. The backlog keeps swelling, and it drops to zero in only one case: when the product is killed. The hardest part is turning the most important items in your backlog into a shippable product. All other things being equal, the relationship is assumed to look like this:

[Chart: expected relationship, where work delivered grows linearly with team size]
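Written out as a simple formula (my own shorthand for the chart, not the author's notation), the assumption is:

    work delivered per unit of time = team size × output of one person, i.e. W(n) = n · p

The second chart below shows that in practice W(n) grows more slowly than n · p as n increases.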

If one digger can move one ton of soil per day, it is assumed that ten diggers can move ten tons. As a rule, resource management in companies is built on this principle: if you want to increase sales, hire more salespeople. In software development, when the backlog grows, the temptation is simply to grow the team. However, with complex products and services, over time the graph usually ends up looking roughly like this:

[Chart: actual relationship, where output per person falls as the team grows]

It is rare to see a huge team working at a breakneck pace; but it often happens that a small team makes rapid progress with enviable consistency.

Many startups make this kind of mistake: as soon as the product becomes successful, more developers and managers are added to the staff. Soon it turns out that velocity is starting to fall. What is going on? Were the first developers more talented, has bureaucracy grown in the company, or was the architecture poorly planned?

I think these are all just symptoms, not the root of the problem. The problem itself comes down to the interaction of three critical factors, only two of which can be controlled directly:

  • Fragility is the effect of new changes. If a new feature affects only part of the machine, it is easy to test and ship. If it affects every part of the machine, testing becomes harder and at the same time more important, and shipping takes an order of magnitude longer.
  • The unit of work is the smallest piece of work a team can perform that produces a usable feature as its output. For example, the result is "Pay with Alexa", not "get together and discuss how to do payments with Alexa".
  • Complexity is how much knowledge is required to implement a feature. Can a developer who knows how to write a feature actually deliver the same thing inside the organization? How much extra has to happen around that work before progress gradually slows down and the product stops gaining features that are valuable from the customer's point of view?

I am particularly interested in why, at the dawn of a product's existence, all these factors are balanced optimally: nothing is fragile, the unit of work is not particularly large (and can usually be agreed with the customer), and complexity is practically absent. So if a team needs to build a GDPR-compliant site, they will have time to research the problem, a decision will be made quickly, and the team will be confident that the site works exactly as planned.

In larger companies these factors combine, so the team keeps growing while the volume of work delivered shrinks. To build a GDPR-compliant website in such a company, you will need a lawyer's sign-off, approval from marketing, approval of the project at board level, A/B testing of the least disruptive implementation, coordination of downtime with the ops team, coordination with the deployment plans of other teams, and so on. Even with all this control and all these processes, the team is far less confident of success, because of the fragility of the whole system and the many unknowns in the ecosystem.

If you scale this example up to a real project, which may have dozens of features and hundreds of changes, it is easy to see how these factors turn the team-size-versus-output chart from the first graph into the second. As the team grows, you are doomed to deliver less work per unit of time, however hard you try to outwit the organizational colossus. Or so it seems; but what, then, can be done?

How to hack all three factors


This problem haunted me for many years and prompted me to study its possible causes. Is rapid progress something only startups can achieve? For a while I simply assumed so, having run into the difficulties of product management in large organizations. But then I looked at all three factors more closely.

Fragility always works against you: it drives ever-growing technical debt in any project of any size. The situation resembles half-life in reverse: every element of the program grows over time and, as development continues, becomes more fragile, and all of this compounds with every new line of code.

The unit of work is tied not to a specific product feature ("Pay with Alexa") but to the difference in the shape of the infrastructure between the "before" and "after" states. The more complicated the "after" becomes, the less work actually gets delivered. That is why, in many companies, planning shifts its emphasis from the needs of the customer ("Pay with Alexa") to the needs of the organization ("meet and discuss who should be involved in implementing 'Pay with Alexa'").

Complexity is a combination of social, organizational and technical factors that directly affects how long it takes to find a suitable developer and whether programmers can be treated as generalists who can be trusted with any job. Moreover, complexity is the one aspect most likely to remain invisible, undocumented and misunderstood. A developer can write a React application at home and release it himself, but inside the organization he will have to take a dozen extra steps that eat up his time while the features the user cares about do not change at all. The programmer will spend most of the day on those steps.

Together, these three factors form a vicious circle: the amount of work delivered shrinks, fragility increases, the developer manages to complete fewer and fewer features, and your product silts up with complexity like invisible sludge. Growing the team does not help, and velocity can only be "increased" by playing games with numbers and metrics. A classic symptom: a "meeting held" line item appears in sprint reports.

In large companies I have seen a couple of flawed approaches intended to break this cycle. The first is "Agile at scale", which produces huge meetings where absolutely everyone involved in developing a particular feature attends and tries to coordinate the work and grasp the complexity. This approach is great for the catering companies delivering lavish lunches to those meetings, but in our case it does not work. The problem is that as the group grows, the list of priority projects gets longer while the projects themselves get smaller, so the problems of fragility and complexity are never fundamentally solved. Over time, Agile at scale produces a tactical task list that looks more like a shopping list and less and less like a coherent path from one well-thought-out feature to the next.

The second: in-house "innovation groups" often try to push through peripheral changes, in the hope that this work will take root in the fragile machine and the whole structure will change for the better. This approach has a bizarre side effect: it entrenches the belief that only such "innovation groups" have the right to change the process. So this method does not solve the problems of organizational complexity either.

After watching many years of such failures, I came to the conclusion that all three factors have to be hacked in order to prevent their combined effect on the work being delivered and to overcome the inertia:

  • Fragility should not increase in future versions or as the product ages.
  • The unit of work should be no smaller than what is required to build a feature that matters to the user.
  • Complexity should not affect the work of a single developer.

If you manage to adopt these ideas, you will be protected from the fate that has pursued every software product in history. It sounds great, but how can it be achieved?

Serverless technologies break the limitations


The advent of cloud technology paved important paths toward this new, "hacked" state. In general, with the arrival of the cloud, delivering a software product became more compact, because the provider started doing a lot of the routine work for you. Before the cloud, if you needed to ship a new user-facing feature, you had to order a server, rack the hardware, arrange network cabling in a data center, and then maintain that equipment as it wore out. In the cloud all of this can be rented, eliminating dozens of organizational steps and saving whole months.

In addition, by removing the need to upgrade hardware in a data center and by providing hardware on demand, the cloud reduces both fragility and complexity. Shipping software is much easier than it was in the old days. Over time, however, the burden of administering a sprawling virtual infrastructure has grown substantially, and many outdated delivery practices have stayed the same. With the cloud, the team can grow considerably further before work starts to slow down, but slow down it does, one way or another.

Serverless technologies change this dynamic radically. A serverless application consists of small pieces of code written by your team (the so-called "glue") and functional "black boxes" managed by the cloud provider. A black box simply accepts configuration and reacts to changes. In a well-architected application, most of the routine operational work falls on these standard black boxes. The application itself is no longer a monolith but a federation of functions and black boxes.
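To make this concrete, here is a minimal sketch of what such "glue" can look like: a hypothetical AWS Lambda handler in Python that takes an incoming API event and stores an order in a managed DynamoDB table. The function, table and field names are my own illustrative assumptions, not something from the original article.

```python
import json
import os
import uuid

import boto3

# The "black box" side: DynamoDB is a fully managed service. The team only
# supplies configuration (a table name); nobody operates the database itself.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["ORDERS_TABLE"])  # hypothetical table name


def handler(event, context):
    # The "glue" side: a few lines that turn an incoming API event into a stored order.
    order = json.loads(event["body"])
    item = {
        "orderId": str(uuid.uuid4()),
        "customerId": order["customerId"],
        "amount": str(order["amount"]),  # stored as a string; DynamoDB does not accept Python floats
    }
    table.put_item(Item=item)

    # Scaling, patching and availability of both the HTTP front end and the
    # database are the provider's problem, not this code's.
    return {"statusCode": 201, "body": json.dumps({"orderId": item["orderId"]})}
```

Everything around these twenty-odd lines, from request routing to storage replication, sits inside black boxes the team configures but never operates.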

In practice, this dramatically affects the three factors that I mentioned above:

  • Fragility is reduced thanks to zero infrastructure management overhead and loose coupling. In our own projects we have seen the code base shrink as much as tenfold as a result of such changes.
  • The unit of work usually matches the cost of building a new feature, since it becomes trivial to create new versions of functions, or entirely new functions that were not needed before.
  • Complexity does not affect the developer: if they can write a function that processes credit card payments, there is practically nothing to do in the serverless application beyond that code, no organizational wrappers and no ecosystem considerations that could slow the work down.

Even in very large serverless applications, it is easy for the product manager to zero in on the handful of elements affected by a given change. It is also easy to run two versions side by side using feature flags. Moreover, it is usually not even necessary to tear down old versions of the code.
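As a rough sketch of that idea (the flag name, gateway names and handler are hypothetical, not from the article), a feature flag can be nothing more than a piece of configuration on the function that routes traffic between the old and the new code path:

```python
import json
import os


def charge_with_legacy_gateway(order):
    # Existing implementation; the old code path stays deployed untouched.
    return {"statusCode": 200, "body": json.dumps({"gateway": "legacy"})}


def charge_with_new_gateway(order):
    # Candidate implementation being rolled out behind the flag.
    return {"statusCode": 200, "body": json.dumps({"gateway": "new"})}


def handler(event, context):
    # The flag is plain configuration on the function (an environment variable
    # here), so both versions run side by side and the flag can be flipped or
    # rolled back without touching anything else in the system.
    if os.environ.get("USE_NEW_PAYMENT_GATEWAY") == "true":
        return charge_with_new_gateway(event)
    return charge_with_legacy_gateway(event)
```

Flipping the flag is a configuration change on one function, not a redeployment of the whole product, which is exactly why old versions rarely need to be demolished.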

In serverless applications the infrastructure always sits at the periphery, and you write only the minimum code needed to combine fully managed services. You never have to think about those services operationally. You are no longer trying to manage a monolith, clean up old code, or keep the whole system in view from a bird's-eye perspective.

Why this matters so much


As the pace of change accelerates, it becomes ever harder to predict what your software will look like in the future or what users will want from you. So attempts to write code "for the ages", code that must keep working in the future no matter what changes, become more and more futile. We have all seen how poor code reuse is in most companies, and how clinging to obsolete platforms slows progress.

Today everything is arranged so that the old system is developed and maintained for as long as possible, until supporting it starts eating up almost all of a programmer's time. Then the company starts over with a new system, solemnly promising not to repeat the mistakes made in the old one. When the three factors sooner or later strangle the new system as well, a technological "forest fire" follows, and everything starts from scratch once again.

We are fixated on fighting the symptoms of complexity, which is why so many paradigms come and go without leaving a significant trace in the history of product management. Serverless development, by contrast, lets the team keep the growth of complexity to a minimum and continue delivering a valuable product at a fairly even pace, without falling into the classic traps that have been the scourge of software development for decades.

The serverless paradigm is only just taking shape, but it already looks extremely promising. At a time when customers demand new features faster than ever, product managers can finally get a platform that lets them think in terms of shipping those features. The process is not held back by mounting organizational complexity, nor stopped by excessive fragility.
