Machine Learning for Managers: The Mystery of Sepuling


Once again, while working with a company on a machine learning (ML) project, I noticed that managers use terms from the ML field without understanding their essence. Although the words are pronounced correctly and placed in the right spots in sentences, their meaning is no clearer to them than the purpose of sepulkas, which, as you know, are used in sepulkaria for sepuling. At the same time, team leads and ordinary developers believe they speak the same language as management, which leads to conflicts that complicate work on the project. This article is therefore devoted to techniques for facilitating (from the Latin for "making easy") communication between developers and management, or how to explain the basic terms of ML simply and clearly and thereby lead your project to success. If this topic is close to you, welcome under the cut.

Note for the aesthete: sepulkas, sepulkaria, and sepuling are terms used by the ingenious Stanisław Lem in the Fourteenth Voyage of Ijon Tichy.

Project start

An ML project should begin with the legitimization of the validation metric. Sounds scary, doesn't it? Let's unpack it. Legitimization (from the Latin, meaning making something lawful) is simply an agreement between the parties, fixed in writing and signed off, ideally also in writing. The parties are the project's customer, its management, and its executors.

Now let's move on to validation. An ML programmer usually has experience writing validation code, and when tracing it he sees true and false returned to him. But how do you explain this concept to a manager who does not deal with code? Let's use a simple example from everyday life.

Imagine that you are walking past a market and see peaches for sale. The seller tells you: "Take it! Good peach, fresh, juicy, you will not regret it." However, you take a closer look and see that in one spot it is spoiled. You say: "How is it good? It's rotten here." The seller offers half price. If you think, "I can cut out the spoiled part, it's only a quarter, seems like a good deal," and buy it, then in ML language validation has taken place and the peach (in ML slang, a sample) is considered valid. If you think that instead of this spoiled one you can find a better peach elsewhere, then invalidation occurs, and you recognize the peach as invalid.

It turns out that there is nothing complicated about validation: we all validate every day, recognizing one thing as good and suitable for ourselves, and invalidating another as bad and unsuitable.

Note for the aesthete: like Jourdain, surprised to learn that he has been speaking prose his whole life. Molière, The Bourgeois Gentleman.

Finally, it remains only to explain what a validation metric is. Let's think about why we decided to buy the peach in the previous example:

  • it is cheap enough (price < threshold value)
  • it is ripe enough (ripeness > threshold value), but not overripe (ripeness below a second threshold value)
  • it is of normal size, that is, its size falls into the "normal" category (all categories: too small, small, normal, large, huge)
  • it is not too spoiled (the area of rotten and spoiled spots is less than a threshold value)

Everything listed above is an example of a validation metric, consisting of four criteria in this case. In the simplest case, when a peach satisfies all the criteria at once, it is recognized as valid and purchased.
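The four criteria above can be sketched as a simple check. This is a minimal illustration only: all the threshold values and parameter names here are hypothetical, invented for the peach example, not taken from any real project.

```python
# A sketch of the peach validation metric from the example above.
# All thresholds and names are hypothetical, chosen for illustration.

def is_valid_peach(price, ripeness, size_category, spoiled_area_ratio):
    """Return True only if the sample satisfies all four criteria at once."""
    MAX_PRICE = 2.0          # price < threshold value
    MIN_RIPENESS = 0.4       # ripe enough ...
    MAX_RIPENESS = 0.9       # ... but not overripe (second threshold)
    MAX_SPOILED_AREA = 0.25  # spoiled area below threshold

    return (price < MAX_PRICE
            and MIN_RIPENESS < ripeness < MAX_RIPENESS
            and size_category == "normal"
            and spoiled_area_ratio < MAX_SPOILED_AREA)

# A cheap, ripe, normal-sized, barely spoiled peach passes;
# an overpriced one does not.
```

The important point for the agreement document is not the code itself but the fact that every threshold in it has been agreed on by all parties in advance.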

Now it becomes obvious why it is so important to agree from the very beginning on exactly how validation will take place, on how many parameters there will be, and on what threshold values will satisfy all interested parties. Descriptions of what to do in cases of partial compliance may deserve a dedicated section.

Naturally, each ML project, depending on its subject area, will have its own validation metric. The document fixing the validation metric is as important for an ML project as a constitution is for a state.

Only after a document regulating the validation metric has finally appeared in the project and become available to all participants does it make sense to write its code. The validation code is the heart of the project, and its quality must be impeccable: any mistake in this part is highly likely to lead to the collapse of the entire ML project.

The mystery of calculating accuracy

The most important indicator of the current state of a project for management is accuracy. How can one simply explain to a manager what it is and what must be done to calculate it?

First we need to explain what a validated sample is. In our example, this is when we bought not a single peach but a whole ton. We sit down, or hire workers, and sort the peaches into two containers labeled X (good) and P (bad). The work done sorting the peaches is the creation of a validated sample.

How to explain why a validated sample is needed? Imagine that you have a younger sister and want to teach her how to choose peaches. You take her to the market and say: "Learn, watch how I do it." When it seems to you that she has learned, you want to test her skill. How? You create a control sample: you take, say, 100 already-sorted peaches from each container and secretly stick hidden labels on them, so that you know which container each one came from but your sister does not, and you ask her to sort them independently into new empty containers. The percentage of matches between your sister's choices and the hidden labels is the accuracy. In other words, accuracy is an objective measure of how far your sister can be trusted to choose peaches for you. 100% means that she is your spitting image and does everything exactly as you would; 0% means that her opinion is the exact opposite of yours.
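The "percentage of matches" above is all that accuracy is. A minimal sketch, where the labels "X" (good) and "P" (bad) and the list names are assumptions carried over from the peach example:

```python
def accuracy(true_labels, predicted_labels):
    """Percentage of positions where the sister's choice matches
    the hidden sticker on the peach."""
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return 100.0 * matches / len(true_labels)

# Hidden stickers vs. the containers the sister chose:
stickers = ["X", "X", "P", "P"]
sisters_choice = ["X", "P", "P", "P"]
# Three of four choices match, so accuracy is 75%.
```

In a real project the labels come from the validated sample and the predictions from the model, but the calculation is exactly this simple.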

Note for the aesthete: yes, you are right, peaches can begin to deteriorate over time, so keep in mind that their validity will have to be reviewed periodically. The same happens with computer data, for example with a characteristic such as "relevance."

Now let's look at four ML performance indicators that are easy to confuse: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). The first half of each term indicates agreement (true) or disagreement (false) between your sister's opinion and the secret peach sticker. The second half simply names the container into which your sister threw the peach (X, good: positive; P, bad: negative). The two words together denote the number of peaches in that category.
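The naming rule in the last paragraph translates directly into code. A sketch, again reusing the hypothetical "X"/"P" labels from the peach example:

```python
from collections import Counter

def confusion_counts(true_labels, predicted_labels, positive="X"):
    """Count TP, FP, TN, FN.

    "T"/"F": does the sister's choice match the hidden sticker?
    "P"/"N": which container did the sister actually choose?
    """
    counts = Counter()
    for truth, pred in zip(true_labels, predicted_labels):
        hit = "T" if truth == pred else "F"
        side = "P" if pred == positive else "N"
        counts[hit + side] += 1
    return counts

# One peach of each kind: a correctly chosen good one (TP),
# a good one thrown to the bad container (FN),
# a bad one thrown to the good container (FP),
# and a correctly rejected bad one (TN).
result = confusion_counts(["X", "X", "P", "P"], ["X", "P", "X", "P"])
```

Note that "positive" refers to the container the sister chose, not to the sticker; mixing those two up is the most common source of confusion among the four terms.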

In addition to accuracy, three auxiliary indicators are also used: precision, recall (sensitivity), and f1_score.

Precision shows, among the peaches your sister threw into container X (good), the percentage that match your opinion. 100% means that every peach she put into container X is one you also recognized as good. A lower value means that peaches you consider unfit have also ended up in container X. This indicator is important when it is critical for the business that bad peaches do not end up among the good ones, while a good peach mistakenly judged unfit is nothing to worry about.

Recall shows the ratio of correctly selected good peaches (TP) to the sum of that value and the good peaches mistakenly considered unfit (TP + FN). 100% means that your sister never throws good peaches into the basket with bad ones. It complements precision: this indicator is important when the business needs good peaches to end up in the unusable container as rarely as possible.

F1 score is a synthetic indicator that combines the benefits of precision and recall. A high value testifies to balanced training and suggests that good peaches do not fall into the basket with bad ones, nor bad ones into the basket with good ones.

Note for the aesthete: this indicator is the harmonic mean of precision and recall and is calculated by the formula:

f1_score = 2*(recall*precision) / (recall + precision)
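All three auxiliary indicators follow from the four counts explained earlier. A sketch of the standard formulas (the function and argument names are mine, not from any particular library):

```python
def precision(tp, fp):
    """Of everything thrown into the 'good' container, what share is truly good?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of all truly good peaches, what share ended up in the 'good' container?"""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p = precision(tp, fp)
    r = recall(tp, fn)
    return 2 * (r * p) / (r + p)

# Example: 8 good peaches correctly chosen, 2 bad ones slipped into
# the good container, 2 good ones mistakenly rejected.
# precision = 8/10 = 0.8, recall = 8/10 = 0.8, f1 = 0.8
```

When precision and recall diverge, the harmonic mean pulls f1 toward the smaller of the two, which is exactly why a high f1 indicates balance rather than one-sided strength.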

The question often arises: why does an ML project manager need to know and understand all these indicators so deeply? Answer: because it matters for the business. As a dairy farm manager, you need to know what milk yield is and by what formula it is computed; as a crop farm manager, you need to know what the harvest yield is and how it is calculated. Yes, the manager may not delve into how the cows are milked, how they calve, or how to treat them, but understanding the key business indicators of the project is the key to business success.


All of us, participants in ML projects, are doing good and necessary work. Which of us, as a student sorting potatoes, tomatoes, and cabbage on a collective farm, did not dream that robots would do it instead of a person? We are making that dream come true, and may our projects be successful. I will be glad if this article makes a small contribution to the successful start of ML projects.

If this article seems useful to you, write in the comments and I will write a second article on how to explain additivity and generalization to management, those pillars of a correct, sound ML project.
