What to think about in the NALSD interview

    I described a typical coding interview earlier. In addition to coding, there is almost always a question about system design: (large) system design. In SRE interviews this takes the form of an even more interesting beast (in my opinion): NALSD, non-abstract large system design. The main difference between the SWE and SRE versions lies precisely in those two letters, "NA".

    What is the difference, and how do you prepare for it? Let's take an example: something very tangible, something nobody would ever ask in a real interview (at Google) :)

    So, let's design a library. For paper books, a perfectly ordinary one. The entire text below was written in one sitting, in about an hour, to show roughly what can be done and what is important to do. So forgive me the messiness, but this is how I think (and therefore I am).

    NALSD Question: Design a public library.


    First, we ask about the load characteristics, or make a reasonable assumption ourselves. Since this is a question about a scalable system, assume at least a city of a million people. It is worth weighing the options: one large building, or district libraries plus a storage depot. The latter seems wiser to me, especially if it is not one city but many.

    So let us focus the design on a single city (with some reservations, a similar mechanism can be applied one level higher to scale to many cities). A city of a million, then. Let us round the numbers for convenience of estimation: say, 1 million potential readers. Readers decide to read at arbitrary points in time, independently of each other, so we can model demand as a simple Poisson process. But computing that properly on the fly is awkward, so let us take one more simplification step and simply assume that 1% of readers will want to take out a book on any given day. For further calculations, that gives us 10,000 books per day.
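As a sanity check on why the flat 10,000/day simplification is safe, here is a minimal back-of-envelope sketch. The 1% daily-demand figure is the assumption made above; everything else follows from Poisson statistics:

```python
import math

# Assumed inputs from the estimate above (all round numbers).
population = 1_000_000          # potential readers in the city
daily_demand_rate = 0.01        # assumption: 1% want a book on any given day

mean_daily_checkouts = population * daily_demand_rate   # Poisson mean (lambda)

# For a Poisson process the standard deviation is sqrt(lambda), so even a
# "busy day" (mean + 3 sigma) is only ~3% above the mean at this scale --
# which is why the flat 10,000/day simplification is reasonable here.
sigma = math.sqrt(mean_daily_checkouts)
busy_day = mean_daily_checkouts + 3 * sigma

print(int(mean_daily_checkouts))  # 10000
print(int(busy_day))              # 10300
```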

    Our task is to support issuing 10,000 books per day (plus, by the way, accepting the same 10,000 books back on some other day in the future) in a city of a million people. At this point the mono-library option already falls away: for all million people to be potential readers, they must be able to reach a library within a reasonable time, otherwise the desire to take a book will simply time out. So we need to estimate the capacity of each local library. Strictly speaking, this should account for population density and reachability, but since that does not greatly affect the system itself, for simplicity we will place the libraries evenly. That still does not tell us how finely to divide them: obviously, spreading 10,000 one-book libraries evenly across the city makes no sense, so we need to find the size that does make sense. Let us descend one level.

    So, one library. For it to make sense, most visitors must find the book they came for. This means we will need to track and forecast the most popular requests and keep those books in stock, and we also need to maintain breadth of assortment. Offhand, I would say a local library should carry at least 1,000 titles, the most popular of them in multiple copies: more copies at the top, fewer in the tail. An average book is read in 3 days to 2 weeks (in reality this depends on the book and deserves separate analysis); let us take a flat weekly average and move on. That is, a borrowed book is absent for about a week, so we must stock enough copies of the top titles to last a week (after which returns begin to replenish them).

    Let us take an average duplication factor of 6 copies per title. So storage capacity starts at 6,000 books. Anything smaller is just a small "top chart", which can still be useful in some niches (say, a rack of "Twilight" in a supermarket next to the children's play area), but we will leave that out of scope for now.

    In the equilibrium state, roughly equal shares are returned and taken each day, plus or minus some spread, but we also need headroom to accept an elevated number of peak returns (for example, due to external synchronization such as holidays, or a change in fashion). The correct approach is to model it; here and now we will estimate the buffer at a third. In total, we keep 6,000 books available for issue, plus space for another 2,000 in reserve.
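The stock sizing above can be sketched in a few lines; the title count, duplication factor, and one-third buffer are all the assumptions stated in the text:

```python
# Stock sizing for one unit, using the assumptions from the text above.
titles = 1000              # distinct titles per local library
duplication_factor = 6     # average copies per title (top-heavy in practice)

circulating_stock = titles * duplication_factor   # books available for issue
return_buffer = circulating_stock // 3            # ~a third, for peak returns

total_capacity = circulating_stock + return_buffer
print(circulating_stock, return_buffer, total_capacity)  # 6000 2000 8000
```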

    So we need a unit capable of storing 8,000 books. Daily restocking is very expensive, so restocking happens every week or two. Two weeks, 6,000 books, with returns skewed, is on the order of 300 books per day. At opening time we can fill all 8,000 slots, so that the initial 2,000 books go into circulation before the first returns arrive: 2,000 over 3 days is about 600 books per day, plus buffer, 800 books per day.
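The opening-ramp arithmetic, as a quick sketch (the 2,000-book initial circulation and 3-day ramp are the figures from the paragraph above; the 800/day is the author's rounded-up number with buffer):

```python
import math

# Rate needed to put the initial 2000 books into circulation in 3 days,
# before the first returns start coming back.
initial_gap = 2000
ramp_days = 3

base_rate = math.ceil(initial_gap / ramp_days)   # ~667 books/day, "about 600"
with_buffer = 800                                # rounded up with headroom

print(base_rate, with_buffer)  # 667 800
```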

    Let us estimate the bandwidth and storage limits. One book occupies on average 2 centimeters of linear shelf space; 8,000 books is 160 meters. Stack that 4 levels high: 40 meters. Divide into even 5-meter racks and we get 8 racks of 4 shelves, each 5 meters long. One librarian (or robo-librarian) can serve two racks; retrieving one book takes up to 5 seconds to reach it, 5 seconds to pick or place the book, and 5 seconds to return, 15 seconds in total. Four librarians give us roughly 15 books per minute at maximum, that is, about 900 books per hour out of the vault.

    Add the time to process a request (10 s), search (5 s), and record it in the check-in/check-out system (2 s), and we get about 400 books per hour. So at peak the storage can issue or accept 400 books per hour, which means our 800 books per day are achievable in 2 working hours.
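The shelf and throughput arithmetic above can be replayed as a sketch; all dimensions and timings are the estimates from the text (the article rounds 960 down to ~900 and 450 down to ~400):

```python
# Storage throughput for one unit, per the shelf/librarian math above.
book_thickness_m = 0.02
books = 8000
shelf_levels = 4
rack_length_m = 5

linear_m = books * book_thickness_m        # 160 m of linear shelf space
per_level_m = linear_m / shelf_levels      # 40 m when stacked 4 levels high
racks = per_level_m / rack_length_m        # 8 racks of 4 shelves each

# One librarian: 5 s to reach + 5 s to pick/place + 5 s back = 15 s/book.
librarians = 4
raw_per_hour = librarians * 3600 // 15     # 960, "about 900" in round numbers

# Per-request overhead: processing 10 s + search 5 s + check-in/out 2 s.
per_book_s = 15 + 10 + 5 + 2
effective_per_hour = librarians * 3600 // per_book_s   # 450, "about 400"

print(int(racks), raw_per_hour, effective_per_hour)  # 8 960 450
```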

    Now consider the other characteristics: to issue 400 books per hour through 4 staff, the waiting hall must hold a queue of about 100 people in front of the service windows, and those people must not create crowds at the entrance and exit. That is, the entrance must pass 400 people per hour in each direction. It turns out the main bottleneck is not even the storage, but the capacity of the hall and the entrance.

    With more accurate calculations, then, one could find the optimal ratio of storage to hall.

    So, we have dealt with one unit; back to the level above. We estimated the total load at about 10,000 books a day and sized one unit at 800 books a day, so we need 12.5 units. With geo-distribution around the city, each reader will be able to reach one or two alternative units (at the city boundary) or even 3-4 (in the center), which lets us smooth out traffic peaks and spikes in demand for specific titles.

    We also know that at any moment any library may be closed (fire, sanitary inspection, repainting the refrigerator handles, whatever), and as their number grows, so does the probability that two are down at once, so we need one spare unit for every 5-6 units. In total, 15 units must provide the required throughput while holding the estimated stock in their storages.
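The unit count, including the spares, as a sketch (the one-spare-per-5-6-units redundancy ratio is the rule of thumb stated above, not a modeled figure):

```python
import math

# Unit count: total demand vs. per-unit capacity, plus spare units for
# outages (one spare per 5-6 working units, per the text above).
daily_demand = 10_000
unit_capacity = 800            # books per day per unit, from earlier

working_units = daily_demand / unit_capacity           # 12.5 units
total_units = math.ceil(working_units + working_units / 5)

print(working_units, total_units)  # 12.5 15
```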

    To maintain the estimated stock, we will need to refresh about half the assortment every week or two (above we settled on two) to follow trends and so on. That means delivering and removing 4,000 books every two weeks at each unit. With each delivery, those same 4,000 books must be pulled from storage and others shelved in their place. At 400 books per hour, refreshing the assortment takes 10 hours at maximum load. That does not seem so bad, and bulk loading will speed many things up, yet maintaining the assortment consumes 5 times more storage throughput than serving the public.

    An average book (2 cm x 20 cm x 30 cm) is roughly 1.5 liters, so 4,000 books is about 6 cubic meters, which fits easily into one GAZelle van. But a cubic meter of paper weighs about 600 kg, so 6 cubic meters is 3.6 tons, while a GAZelle carries a ton and a half, so we need three of them. Plus one spare. With 15 units refreshed every 2 weeks, even on a uniform schedule we are at the limit, so we will have to add another GAZelle.
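The delivery logistics above, sketched out; the per-book volume, paper density, and truck payload are the rough figures from the text:

```python
import math

# Delivery logistics per unit refresh, per the figures above.
books_to_move = 4000
book_volume_l = 1.5            # ~2 cm x 20 cm x 30 cm, rounded up
paper_density_kg_m3 = 600
truck_payload_t = 1.5          # a GAZelle-class light truck

volume_m3 = books_to_move * book_volume_l / 1000   # 6 m^3: fits one truck by volume
weight_t = volume_m3 * paper_density_kg_m3 / 1000  # 3.6 t: but not by weight!

trucks = math.ceil(weight_t / truck_payload_t)     # 3 trucks (plus one spare)
print(volume_m3, weight_t, trucks)  # 6.0 3.6 3
```

Note how the binding constraint flips mid-calculation: volume says one truck, weight says three. Catching exactly this kind of flip is what the "non-abstract" part is about.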

    And we have not yet had time to think through where these GAZelles drive to and from, so suppliers' warehouses and drop-off points for outdated books still have to appear on the diagram ...

    So, time is up. What is so non-abstract about the NALSD question? Scalability is bound to appear in any large system design. The key is concreteness.

    Above, I made many assumptions, but every subsequent estimate was built on the previous ones. For the numbers, I also tried to note "how to estimate this properly", though it is very easy to forget to do so: you get tired and forget. It is just as easy to pull a figure from memory without justification ... But because the design is concrete, if any assumption turns out to be wrong, you can correct it and simply recompute everything downstream.

    As I recall, in my own interview I assumed 600 IOPS per disk in an estimate, simply because I had recently been optimizing and struggling with an array that delivered 600 IOPS ... The interviewer then asked, slightly surprised: 600 IOPS per disk? :D

    During the interview, the interviewer can correct any of your assumptions, or add a constraint you did not know about, did not consider, did not ask about, or simply change the requirements on the fly. In that case, since you operate on concrete numbers throughout, it is just a trivial update of those numbers.

    If correcting an assumption forces a redesign of the system, that is either a design error, an invalid adjustment, or a step outside the system's domain of applicability, which is not a rare situation in real life either. It is important not to miss such moments, and to sanity-check the resulting figures against reality both during design and after every adjustment.

    As SREs, we are obliged to think in terms of scaling real systems under the constraints of real hardware. A beautiful algorithm may well exist that trades enormous time costs for a tiny amount of memory ... But in the real world, nobody yet attaches a petabyte of RAM to a single processor core. So if we have a petabyte of RAM, we have at least ten thousand processors. Or twenty. Or thirty. And we must look for the optimum not globally, but here and now, under these conditions.

    You do not need to remember exact numbers, but you must know the orders of magnitude: those same IOPS are about a hundred for an HDD and hundreds of thousands for an SSD. But those hundreds of thousands come with a terabyte of SSD costing three or four times a terabyte of HDD. And that is before counting the surroundings: rack space, blades, switch ports and other things that stop being pennies once the count runs to petabytes.
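To make the "optimum here and now" point concrete, here is a toy sizing sketch. All device numbers (capacity and IOPS per device, the workload targets) are illustrative assumptions in the spirit of the orders of magnitude above, not vendor specs:

```python
import math

# Toy example: for a given capacity and IOPS target, which medium needs
# fewer devices? You buy whichever resource you run out of first.
need_tb, need_iops = 100, 50_000

hdd_iops, hdd_tb = 100, 10       # ~a hundred IOPS per spindle (assumed)
ssd_iops, ssd_tb = 100_000, 4    # ~hundreds of thousands per SSD (assumed)

hdds = max(math.ceil(need_tb / hdd_tb), math.ceil(need_iops / hdd_iops))
ssds = max(math.ceil(need_tb / ssd_tb), math.ceil(need_iops / ssd_iops))

print(hdds, ssds)  # 500 25
```

Here the HDD fleet is IOPS-bound (500 spindles for only 100 TB), so the 3-4x per-terabyte SSD premium can still win; flip the workload to cold capacity and the answer flips too.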

    Now sit back in your chair, relax, and try to design a subscription delivery system for fresh chicken eggs.

    If you want to share this with English-speaking colleagues, there is a version in English (as well as on the Anglogram).
