Caching and memcached
First, a word about the title of this series: the posts will cover caching on the Web (in highly loaded web projects), the use of memcached for caching, and other uses of memcached in web projects. In other words, all three components of the title will appear in various combinations throughout the series.
Today caching is an integral part of any web project, not only heavily loaded ones. For every resource, server response time is critical for the user: as response time grows, visitors leave. So we must minimize response time, which means minimizing the time it takes to build a response; and building a response requires fetching data from external resources (backends). These resources can be databases or any other relatively slow data sources (for example, a remote file server that we ask for the amount of free space). Generating a single page of a fairly complex site may require dozens of such calls. Many of them are fast, 20 ms or less, but there is always a small number of queries whose computation takes seconds or minutes (even in the best-optimized system there can be one, although their number should be kept to a minimum). If we add up all the time spent waiting for query results (taking the longest query when queries run in parallel), we get an unacceptable response time.
The solution to this problem is caching: we put the result of the computation into some storage with excellent access-time characteristics (for example, memcached). Now, instead of calling slow, complex, heavy backends, it is enough to query the fast cache.
Memcached and Caching
Principle of locality
We find caches everywhere in electronics and software architecture: CPU caches (first and second level), hard disk buffers, operating system caches, the buffer in a car CD player. What explains this success of caching? The answer lies in the principle of locality: over a given period of time, a program or a device tends to work with a certain subset of the overall data. In the case of RAM, this means that if a program works with the data at address 100, its next access is far more likely to be at address 101, 102, and so on than at address 10000. The same holds for a hard drive: its buffer is filled with data from the areas adjacent to the sectors last read; if our programs worked, at any given moment, not with a relatively small set of files but with the entire contents of the disk, buffers would be meaningless. A car CD player's buffer pre-reads the next minutes of music from the disc, because we are more likely to listen to a track sequentially than to skip around, and so on.
In the case of web projects, caching succeeds because a site always has its most popular pages, some data is used on all or almost all pages; that is, there are queries that are requested far more often than others. We replace several backend calls with a single call that builds the cache, and all subsequent requests are served by the fast cache.
A cache is not strictly better than the original data source: the CPU cache is orders of magnitude faster than RAM, but we cannot make all RAM as fast as the cache, since that would be economically inefficient and technically difficult. A hard disk buffer satisfies requests for data orders of magnitude faster than the disk itself, but it cannot retain data when the power is turned off; in this sense it is worse than the device itself. The situation with caching on the Web is similar: the cache is faster and more efficient than the backend, but it usually cannot preserve data across a server restart or crash, and it has no logic for computing results: it can only return what we put into it earlier.
Memcached is essentially a huge hash table in RAM, accessible over a network protocol. It provides a service for storing values associated with keys. We access the hash table through a simple network protocol; the client can be a program written in any programming language (there are clients for C/C++, PHP, Perl, Java, and others).
The simplest operations are getting the value of a given key (get), setting a key's value (set), and deleting a key (delete). To build chains of atomic operations (under concurrent access to memcached from parallel processes), additional operations are provided: incrementing/decrementing a key's value (incr/decr), appending data to the end or the beginning of a key's value (append/prepend), atomic get-with-token and compare-and-swap (gets/cas), and others.
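To make the command set concrete, here is a minimal in-process sketch that mimics the semantics of these operations (get/set/delete, incr, append, gets/cas). It is a toy stand-in, not the real memcached protocol or any actual client library; the class and its internals are invented for illustration.

```python
import time

class MiniMemcached:
    """Toy in-process stand-in for memcached's command set (illustration only)."""

    def __init__(self):
        self._data = {}        # key -> (value, expires_at or None, cas_token)
        self._cas_counter = 0  # monotonically increasing version token

    def set(self, key, value, expire=0):
        self._cas_counter += 1
        expires = time.time() + expire if expire else None
        self._data[key] = (value, expires, self._cas_counter)
        return True

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires, _ = item
        if expires is not None and time.time() > expires:
            del self._data[key]          # lazy expiration, like memcached
            return None
        return value

    def delete(self, key):
        return self._data.pop(key, None) is not None

    def incr(self, key, delta=1):
        value = self.get(key)
        if value is None:
            return None                  # memcached refuses incr on a missing key
        new_value = int(value) + delta
        self.set(key, new_value)
        return new_value

    def append(self, key, suffix):
        value = self.get(key)
        if value is None:
            return False
        self.set(key, value + suffix)
        return True

    def gets(self, key):
        """Return (value, cas_token) for a later compare-and-swap."""
        item = self._data.get(key)
        return (item[0], item[2]) if item else (None, None)

    def cas(self, key, value, token):
        """Store only if the key has not been modified since gets()."""
        item = self._data.get(key)
        if item is None or item[2] != token:
            return False                 # someone changed the key concurrently
        self.set(key, value)
        return True
```

The gets/cas pair is the interesting one: a client reads a value together with a version token, computes a new value, and the store succeeds only if no other process has modified the key in between.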
Memcached was implemented by Brad Fitzpatrick as part of the LiveJournal project, where it was used to offload the database when serving page content. Today memcached is at the core of many large projects, such as Wikipedia, YouTube, Facebook and others.
General caching scheme
The general caching scheme looks like this: the frontend (the part of the project that builds the response to the user) needs some data. It asks the memcached server, fast as a cheetah, for the cached value (a get request). If the corresponding key is found, the work ends there. Otherwise the frontend turns to the heavy, slow, but powerful (like an elephant) backend, which is most often a database. The result is immediately written to memcached as a cache entry (a set request). The key is usually given a maximum lifetime (an expiration time), which defines when the cache entry becomes invalid.
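This scheme can be sketched in a few lines. Everything here is a hypothetical stand-in: a dict plays the role of memcached, and slow_backend_query() plays the role of an expensive database call.

```python
import time

cache = {}       # key -> (value, expires_at); stand-in for memcached
CACHE_TTL = 300  # key lifetime (expiration) in seconds

def slow_backend_query(user_id):
    # imagine a heavy SQL query here
    return {"id": user_id, "name": "user%d" % user_id}

def get_user(user_id):
    key = "user:%d" % user_id
    entry = cache.get(key)                          # 1. try the fast cache (get)
    if entry is not None and entry[1] > time.time():
        return entry[0]                             # cache hit: done
    value = slow_backend_query(user_id)             # 2. cache miss: call the backend
    cache[key] = (value, time.time() + CACHE_TTL)   # 3. store with expiration (set)
    return value
```

The first call for a given user pays the full backend cost; every call within the next five minutes is served from the cache.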
This standard caching scheme is always implemented in some form. Instead of memcached, some projects use local files or other storage (another database, a PHP accelerator's cache, etc.). However, as will be shown later, in a highly loaded project this scheme may not work in the most efficient way. Nevertheless, in what follows we will rely on exactly this scheme.
How is memcached built? How does it manage to work so fast that even dozens of memcached requests while building a single page do not introduce a significant delay? At the same time memcached is extremely undemanding of computing resources: on a loaded installation, its processor time rarely exceeds 10%.
First, memcached is designed so that all of its operations have algorithmic complexity O(1), i.e. the execution time of any operation does not depend on the number of keys memcached stores. This means that operations (or capabilities) whose implementation would require linear, O(n), time are simply absent: memcached has no way to group keys "into folders", nor will we find group operations over keys or their values.
The main optimized operations are allocating and freeing memory blocks for storing keys, and applying the least-recently-used (LRU) eviction policy to clear the cache when memory runs short. Key lookup is done through hashing, so it has O(1) complexity.
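The LRU idea can be illustrated with a small sketch. This is not memcached's actual slab-based implementation, just the eviction policy in miniature: capacity is measured in slots rather than bytes, and the least recently used key is dropped when the cache is full.

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of LRU eviction: when space runs out, drop the key
    that was used least recently (illustration, not memcached's internals)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()       # least recently used key comes first

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)      # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the LRU key
```

Both get and set touch only the affected key (hash lookup plus a constant-time reorder), which is what keeps every operation O(1).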
Memcached uses asynchronous I/O and does not rely on threads, which gives an additional performance boost and lower resource requirements. In fact, memcached can use threads, but only to exploit all of the server's cores or processors under very high load; in any case, a thread is not created per connection.
In fact, we can say that the response time of a memcached server is determined almost entirely by network costs: it is close to the time needed to send a packet from the frontend to the memcached server and back (RTT). Such characteristics make it possible to use memcached in highly loaded web projects for various tasks, including data caching.
Memcached is not reliable storage: a key may be deleted from the cache before it expires. The project's architecture must be prepared for this and must respond flexibly to the loss of keys. Three main causes of key loss can be identified:
- The key was evicted before its expiration date because memory was needed to store the values of other keys. Memcached uses the LRU policy, so this loss means the key was rarely requested, and cache memory is being freed to store more popular keys.
- The key was deleted because its lifetime expired. Strictly speaking, this is not a loss, since we limited the key's lifetime ourselves, but from the point of view of client code such a loss is indistinguishable from the other cases: memcached simply answers that there is no such key.
- The most unpleasant situation is a crash of the memcached process or of the server it runs on. In this situation we lose all the keys stored in that cache. A cluster setup softens the consequences: the project's keys are "spread" across many memcached servers, so the failure of one cache is less noticeable.
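Spreading keys across a cluster can be sketched as simple modulo sharding: each key deterministically maps to one server, so losing a server loses roughly 1/N of the cache rather than all of it. The server addresses below are hypothetical; real client libraries often use consistent hashing instead, which reshuffles fewer keys when the server list changes.

```python
import hashlib

# Hypothetical memcached server pool
SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key, servers=SERVERS):
    """Map a key to exactly one server via hash(key) mod N."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]
```

Every frontend computes the same mapping, so they all read and write a given key on the same server without any coordination.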
All the situations described must be kept in mind when developing software that works with memcached. The data we store in memcached can be divided by how critical its loss is.
"You can lose . " This category includes caches of database samples. The loss of such keys is not so bad, because we can easily restore their values by contacting the backend again. However, frequent cache losses lead to excessive calls to the database.
"I would not want to lose . " Here you can mention the counters of site visitors, resource views, etc. Although it is sometimes directly impossible to restore these values directly, the values of these keys have a time-limited meaning: after a few minutes their value is no longer relevant, and a new value will be calculated.
"Should not lose at all." Memcached is convenient for storing user sessions: every session is equally accessible from every server in the frontend cluster. We would never want to lose the contents of sessions, otherwise users would be "logged out" of the site. How can we try to avoid this? We can duplicate session keys on several memcached servers in the cluster, reducing the likelihood of loss.
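Duplicating session keys might be sketched like this: each session is written to a primary server and one replica, and reads fall through to the replica when the primary has lost the key. The server names and dict-backed stores are stand-ins for real memcached instances, invented for illustration.

```python
import hashlib

SERVERS = ["cacheA", "cacheB", "cacheC"]      # hypothetical server pool
stores = {name: {} for name in SERVERS}       # dicts standing in for the servers

def servers_for(key, copies=2):
    """Pick `copies` distinct servers: primary by hash, replica(s) next in line."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    start = h % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(copies)]

def set_session(session_id, data):
    for name in servers_for(session_id):      # duplicate the write
        stores[name][session_id] = data

def get_session(session_id):
    for name in servers_for(session_id):      # fall through to the replica
        value = stores[name].get(session_id)
        if value is not None:
            return value
    return None
```

The duplication doubles write traffic for session keys, which is the price paid so that the crash of a single cache server does not log users out.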