How we distribute orders between drivers in Yandex.Taxi

    One of the main tasks at Yandex.Taxi is to get a car to the user quickly while reducing the driver's "idle run" time (the time a driver spends on the line without a passenger). At first glance everything is simple: the user chooses a tariff and indicates additional wishes (a child seat, for example); it remains to filter the drivers on the line by these criteria, pick the closest one and offer them the order. However, it is only that simple at first glance.

    Today I will tell the Habr community how we choose the most suitable driver and how this process has evolved over time. You will learn about two approaches to solving the problem.

    General search architecture


    When the user taps the "Call a taxi" button, an order object is created in the back end and its processing begins according to a state machine. For the order to go from the "Waiting" state to "Driver assigned", we need to find a driver, offer them the order and wait for confirmation that the order has been accepted.
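
    Below is a minimal sketch of such a transition in Python, under the assumption that the search and offer steps are provided as callbacks; the state names come from the paragraph above, everything else is illustrative rather than the actual Yandex.Taxi backend.

```python
from enum import Enum, auto


class OrderState(Enum):
    WAITING = auto()          # order created, no executor yet
    DRIVER_ASSIGNED = auto()  # driver found and confirmed the order


def process_waiting_order(order, search_driver, offer_order):
    """Try to move an order from WAITING to DRIVER_ASSIGNED.

    `search_driver` and `offer_order` are hypothetical callbacks standing in
    for the real search and offer steps described in the text.
    """
    driver = search_driver(order)
    if driver is None:
        return OrderState.WAITING         # no candidate yet, retry later
    if offer_order(driver, order):        # driver confirmed the order
        return OrderState.DRIVER_ASSIGNED
    return OrderState.WAITING             # driver declined, keep searching
```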

    Greedy Approach


    For a very long time, a greedy approach was used at Yandex.Taxi. With this approach, at the executor-search stage a request is made to the Tracker microservice, which is responsible for drivers. Tracker knows everything about the cars: from color and branding to the current location. Tracker has a local geo-index of drivers and connectors to routing services (routers) for building routes from point A to point B (and even through points C, D and so on). So when a driver-search request arrives, Tracker first finds the nearest cars in its local geo-index by straight-line radius, taking into account the "hard" order restrictions (car class, requirements such as a child seat or yellow plates). Then the time and length of each car's pickup route are computed, and the best option is chosen with this information taken into account.
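
    The greedy selection can be pictured roughly like the sketch below: filter nearby drivers by the hard constraints, refine the estimate with route times from the router, and take the car with the smallest pickup time. `geo_index`, `router` and the field names are assumptions for illustration, not the real Tracker API.

```python
def pick_driver_greedy(order, geo_index, router, radius_m=3000):
    # Candidates within a straight-line radius that satisfy hard restrictions.
    candidates = [
        d for d in geo_index.nearby(order.point_a, radius_m)
        if d.car_class == order.car_class
        and order.requirements <= d.capabilities  # e.g. child seat (set subset)
    ]
    if not candidates:
        return None
    # Refine the straight-line estimate with the actual route time to point A.
    eta = {d.id: router.route_time(d.position, order.point_a) for d in candidates}
    return min(candidates, key=lambda d: eta[d.id])
```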

    Later, this logic evolved: for each driver we began to compute a "score" for the order, a function of the car's pickup time, and drivers were ranked by this score. The function takes into account not only the pickup time but also many other factors: from the level of demand at points A and B to the driver's "experience". We call this greedy assignment the bonus assignment.
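
    As a hedged illustration of such scoring, the sketch below starts from the pickup ETA and adjusts it with a few of the factors mentioned above; the factor names and weights are invented for the example.

```python
def score(eta_seconds, demand_a, demand_b, driver_experience):
    bonus = 0.0
    bonus += 60 * demand_b           # reward heading towards high demand at B
    bonus -= 30 * demand_a           # pulling a car out of a hot area at A costs
    bonus += 10 * driver_experience  # slight preference for experienced drivers
    return eta_seconds - bonus       # lower score is better


def rank_candidates(candidates):
    # candidates: list of dicts with exactly the factors used by score()
    return sorted(candidates, key=lambda c: score(**c))
```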

    Buffer (batch) approach


    However, with the greedy approach the nearest driver goes to whoever ordered a taxi first, and some users may even be left without a car at all.

    When demand grows and competition for executors begins, the greedy approach is no good. To satisfy demand as well as possible even at peak hours, we use a variety of approaches and algorithms. One of them is buffer (batch) assignment of drivers to orders. It is based on a well-known combinatorial optimization problem, the assignment problem. Briefly, its essence: suppose we have N jobs and M performers, and any performer can complete any job in time p(i, j), where 0 <= i < N, 0 <= j < M. We need to assign a performer to each job so as to minimize the total time to complete all the work (while each performer can take on only one job).
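
    In code, the input to this problem is just the cost matrix p(i, j). Here is a sketch of how it could be built from the scoring above (the names are assumptions, and infeasible pairs get a large finite penalty so a solver can still run):

```python
import numpy as np

INFEASIBLE = 1e9  # large cost for (order, driver) pairs that violate hard requirements


def build_cost_matrix(orders, drivers, scoring):
    p = np.full((len(orders), len(drivers)), INFEASIBLE)
    for i, order in enumerate(orders):
        for j, driver in enumerate(drivers):
            if driver.satisfies(order):       # hard restrictions still apply
                p[i, j] = scoring(order, driver)
    return p
```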

    When solving this assignment problem, our "cost" of an order being performed by an executor (a taxi driver) is the value of the scoring function of the car's pickup time for that user. The problem can be described in terms of bipartite graphs: orders on one side, performers on the other, with weighted edges (scores) between them. Thus, one of our goals is to minimize the total pickup time while maximizing the number of fulfilled orders (maximum matching). One of the best-known ways to solve this problem is the Hungarian algorithm.
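
    As a minimal sketch, the assignment step can be reproduced with SciPy's implementation of this algorithm, `scipy.optimize.linear_sum_assignment` (the production service has its own solver; this only shows the idea, and the numbers are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INFEASIBLE = 1e9  # placeholder cost for pairs that violate hard requirements


def assign(cost):
    # cost[i, j] = scoring of order i served by driver j
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] < INFEASIBLE]


# Example: 3 orders, 4 drivers, costs roughly in seconds of pickup time.
cost = np.array([
    [300.0, 120.0, INFEASIBLE, 450.0],
    [200.0, 600.0, 180.0, 220.0],
    [INFEASIBLE, 240.0, 300.0, 90.0],
])
print(assign(cost))  # [(0, 1), (1, 2), (2, 3)]
```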

    Obviously, with buffer assignment we cannot return a driver on request, as with the greedy approach. First the order has to be put in a queue, then a draw has to be run, and only then can the found driver be reported. This did not fit into the order-processing state machine at all, and it had to be slightly reworked. To build and test the new solution without affecting our colleagues, we agreed right away to do everything in a separate microservice, DriverDispatcher. It accepts orders, puts them in a queue, finds drivers and saves the results of the draws.
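
    A rough sketch of that flow, with all names assumed rather than taken from the real service: orders are enqueued, periodically matched against drivers in one batch, and the result of each draw is stored so the order state machine can pick it up.

```python
import time


class DriverDispatcher:
    def __init__(self, tracker, solver, store):
        self.queue = {}         # order_id -> order waiting for a draw
        self.tracker = tracker  # batch scoring of (order, driver) pairs
        self.solver = solver    # assignment-problem solver (e.g. Hungarian)
        self.store = store      # persisted results of the draws

    def enqueue(self, order):
        self.queue[order.id] = order

    def remove(self, order_id):
        self.queue.pop(order_id, None)

    def run_draw(self):
        orders = list(self.queue.values())
        if not orders:
            return
        costs = self.tracker.batch_score(orders)        # cost matrix
        for order, driver in self.solver.assign(orders, costs):
            self.store.save_assignment(order.id, driver.id)
            self.remove(order.id)

    def loop(self, period_sec=2.0):
        while True:                                     # scheduled draws
            self.run_draw()
            time.sleep(period_sec)
```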

    First of all, we had to prepare Tracker for a new load profile. With the greedy approach, driver-search requests simply landed one by one on the Tracker balancer and were spread across its instances, but with buffer assignment all requests come from the same machine, and individual requests would simply exhaust the connection pool. So we added batch processing of requests to Tracker, with the requests inside a batch handled in parallel. Along the way we also had to pick a reasonable batch size: on the client side (in DriverDispatcher), we split a large batch into several smaller ones and sent them to different Tracker instances.
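
    The client-side batching could look roughly like this (assuming the Tracker instances expose an asynchronous batch-scoring call; the names are illustrative):

```python
import asyncio
import itertools


def chunks(items, size):
    for start in range(0, len(items), size):
        yield items[start:start + size]


async def batch_score(pairs, tracker_instances, chunk_size=100):
    # One request per chunk, spread across instances and sent in parallel,
    # instead of one request per (order, driver) pair.
    tasks = [
        instance.score_batch(chunk)
        for chunk, instance in zip(chunks(pairs, chunk_size),
                                   itertools.cycle(tracker_instances))
    ]
    results = await asyncio.gather(*tasks)
    return [score for chunk_result in results for score in chunk_result]
```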

    So, Tracker is ready, scoring is computed both in Tracker (greedy assignment) and in the new service (DriverDispatcher), and the algorithm for solving the assignment problem is debugged and works correctly. The question was how to integrate all of this into the order-processing state machine. We added sending and deleting order information in DriverDispatcher as the order moves from state to state. And it almost worked. Almost, because the iterations of the executor search for an order could not be controlled from the outside. We could have simply replaced the trip to Tracker for a driver with a trip to our service, returning the driver once found and a 404 before that. But this is bad, because we need to offer the order to the driver as soon as we have found them, and even a few seconds of delay matter here: the driver can simply turn in the wrong direction, and the match becomes irrelevant. So we made it possible to trigger the executor-search process without affecting the scheduled tasks. This way we kept the search logic (with its re-queries) and added the ability to call it outside the scheduler.
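
    Schematically, the same search routine ends up with two entry points, one for the scheduler and one for the external trigger fired right after a draw; the names below are assumptions.

```python
def search_executor(order, dispatcher, tracker):
    driver = dispatcher.get_assignment(order.id)  # result of the buffer draw
    if driver is None:
        driver = tracker.find_greedy(order)       # fall back to the greedy path
    return driver


def scheduled_search(order, dispatcher, tracker):
    # Called periodically by the order-processing scheduler (with re-queries).
    return search_executor(order, dispatcher, tracker)


def on_draw_finished(order, dispatcher, tracker):
    # Triggered as soon as the draw finds a driver, without waiting for the
    # next scheduler tick, so the offer goes out with minimal delay.
    return search_executor(order, dispatcher, tracker)
```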

    Thus, we were able to combine the main order-processing state machine with the buffer-dispatch state machine without touching the working logic and without races between states. Now we could run the first experiments on live users.

    This is all very cool, you might say, but what about the executor search itself: if the search does not start immediately after the order arrives, doesn't the total time just grow instead of being compensated by a faster pickup? Not quite: with the help of various techniques (including machine learning), we were able to isolate the cases where waiting makes sense, while in the other cases the waiting time does not change.

    Pin draw


    Another way to find an executor faster is to start looking for one BEFORE the order is created. When a new pin appears (that is, the user is only entering order data into the app), machine-learning algorithms estimate the probability that an order will follow and decide whether to take the pin into account in the buffer driver search. This way we can find a car in advance and, when the user presses the order button, immediately make an offer to the right driver.
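
    A hedged sketch of this pre-search: the model, threshold and field names are assumptions, only the overall flow follows the description above.

```python
ORDER_PROBABILITY_THRESHOLD = 0.6  # illustrative cut-off


def on_new_pin(pin, model, dispatcher):
    # The user is still filling in the order form; estimate how likely an
    # actual order is and, if likely enough, let the pin join the buffer draw.
    p_order = model.predict_order_probability(
        point_a=pin.point_a,
        tariff=pin.tariff,
        time_of_day=pin.time_of_day,
    )
    if p_order >= ORDER_PROBABILITY_THRESHOLD:
        dispatcher.enqueue_pin(pin)


def on_order_created(order, dispatcher):
    driver = dispatcher.get_assignment(order.pin_id)
    if driver is not None:
        return driver              # offer immediately to the pre-found driver
    dispatcher.enqueue(order)      # otherwise go through the usual buffer flow
    return None
```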

    Conclusion


    Matching orders and drivers is not an easy task; it requires taking into account many factors. One of them is the context of driver movements when selecting candidates for an order. We will talk about it in future posts.
