PostSharp: Solving the Caching Problem

Original author: Matthew Groves
  • Translation
Sometimes there is simply no way to speed up a particular operation. It may depend on a service hosted on an external web server, or it may be CPU-intensive; or it may consist of fast operations whose parallel execution nevertheless drains all the performance resources of your machine. There are many reasons to use caching. Note that PostSharp does not come with a caching framework of its own; it simply lets you apply one orders of magnitude faster, without tedious work such as scattering caching code throughout the program's source. It lets you solve the problem elegantly, encapsulating the task in classes and making it reusable.



Suppose I want to find out, on a car dealership's website, how much the cars it has for sale cost. To do this I will use an application that downloads from the server the dealership's price list for cars of a given make, model, and year of manufacture. Since (for the purposes of this example) the price-list values change too often, I will call a web service to get them. Suppose the web service is slow, but I want to look up a great many cars. I cannot make someone else's web service faster, but I can cache the data it returns, thereby reducing the number of requests.
One of the key features of PostSharp is method-call "interception": injecting into a method so that our code runs both before and after the method body executes. We will use this capability to implement caching:
[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    [NonSerialized]
    private static readonly ICache _cache;
    private string _methodName;

    static CacheAttribute()
    {
        if(!PostSharpEnvironment.IsPostSharpRunning)
        {
            // one minute cache
            _cache = new StaticMemoryCache(new TimeSpan(0, 1, 0));
            // use an IoC container/service locator here in practice
        }
    }

    public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
    {
        _methodName = method.Name;
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        var key = BuildCacheKey(args.Arguments);
        if (_cache[key] != null)
        {
            args.ReturnValue = _cache[key];
        }
        else
        {
            var returnVal = args.Invoke(args.Arguments);
            args.ReturnValue = returnVal;
            _cache[key] = returnVal;
        }
    }

    private string BuildCacheKey(Arguments arguments)
    {
        var sb = new StringBuilder();
        sb.Append(_methodName);
        foreach (var argument in arguments.ToArray())
        {
            sb.Append(argument == null ? "_" : argument.ToString());
        }
        return sb.ToString();
    }
}

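The article uses an ICache interface and a StaticMemoryCache class without showing them. A minimal sketch of what they might look like, inferred purely from the indexer-based usage in OnInvoke (the member names and expiration strategy here are assumptions, not the article's actual implementation):

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical cache abstraction matching the indexer usage in OnInvoke.
public interface ICache
{
    object this[string key] { get; set; }
}

// A minimal in-memory implementation with per-entry expiration.
public class StaticMemoryCache : ICache
{
    private readonly TimeSpan _ttl;
    private readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _store =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public StaticMemoryCache(TimeSpan ttl)
    {
        _ttl = ttl;
    }

    public object this[string key]
    {
        get
        {
            Tuple<object, DateTime> entry;
            // A missing or expired entry reads as null, which is what OnInvoke checks for.
            if (_store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                return entry.Item1;
            return null;
        }
        set
        {
            _store[key] = Tuple.Create(value, DateTime.UtcNow.Add(_ttl));
        }
    }
}
```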


I save the method name at compile time and initialize the cache service at run time. As the cache key I use the method name concatenated with the values of all of the method's arguments (see the BuildCacheKey method; null arguments are represented by an underscore), which makes the key unique per method and per set of arguments. In OnInvoke I check whether the key exists in the cache and, if it does, use the cached value. Otherwise I invoke the original method and cache its result until the next call.
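To make the key format concrete, here is the key-building logic extracted into a standalone sketch (the example call is hypothetical; the logic mirrors BuildCacheKey from the aspect):

```csharp
using System;
using System.Text;

class CacheKeyDemo
{
    // Mirrors BuildCacheKey from the aspect: the method name followed by
    // each argument's string form, with "_" standing in for null.
    public static string BuildCacheKey(string methodName, object[] arguments)
    {
        var sb = new StringBuilder();
        sb.Append(methodName);
        foreach (var argument in arguments)
        {
            sb.Append(argument == null ? "_" : argument.ToString());
        }
        return sb.ToString();
    }

    static void Main()
    {
        // A hypothetical call like GetCarValue(2010, null) would be keyed as:
        Console.WriteLine(BuildCacheKey("GetCarValue", new object[] { 2010, null }));
        // GetCarValue2010_
    }
}
```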
My example has a GetCarValue method that simulates a web-service call to get information about a car. Its parameters can take many different values, so it may return a different result on each call (in our example, only when there is no cached value):
[Cache]
public decimal GetCarValue(int year, CarMakeAndModel carType)
{
    // _msToSleep, baselineValue and yearDiscount are assumed to be fields
    // of the containing class (not shown in the listing)
    // simulate web service time
    Thread.Sleep(_msToSleep);

    int yearsOld = Math.Abs(DateTime.Now.Year - year);
    int randomAmount = (new Random()).Next(0, 1000);
    int calculatedValue = baselineValue - (yearDiscount*yearsOld) + randomAmount;
    return calculatedValue;
}


A few notes about this aspect:
  • I could have used OnMethodBoundaryAspect instead of MethodInterceptionAspect; either approach would work. I chose MethodInterceptionAspect here simply because it was the more straightforward fit for the program's requirements.
  • Remember that it makes no sense to load and initialize the cache while PostSharp itself is running (i.e., at post-compilation time rather than while the application is running), so we must check whether PostSharp is currently running. Another way to load dependencies is to put that code in RuntimeInitialize, which is only ever called at run time.
  • This aspect does not support 'out' and 'ref' parameters. That could certainly be added, but in my opinion 'out' and 'ref' parameters have no place in methods that are candidates for caching, and if you agree with me, let's not spend time implementing them.

Compile-time checks


There are cases where caching is simply a bad idea: for example, when a method returns a Stream, IEnumerable, IQueryable, or a similar lazily-evaluated or stateful type. Such values should not be cached. To enforce this, override the CompileTimeValidate method, for example like this:
public override bool CompileTimeValidate(MethodBase method)
{
    // (_className is assumed to be captured in CompileTimeInitialize, alongside _methodName)
    var methodInfo = method as MethodInfo;
    if(methodInfo != null)
    {
        var returnType = methodInfo.ReturnType;
        if(IsDisallowedCacheReturnType(returnType))
        {
            Message.Write(SeverityType.Error, "998",
             "Methods with return type {0} cannot be cached in {1}.{2}",
             returnType.Name, _className, _methodName);
            return false;
        }
    }
    return true;
}

private static readonly IList<Type> DisallowedTypes = new List<Type>
             {
                 typeof (Stream),
                 typeof (IEnumerable),
                 typeof (IQueryable)
             };
private static bool IsDisallowedCacheReturnType(Type returnType)
{
    return DisallowedTypes.Any(t => t.IsAssignableFrom(returnType));
}



Thus, if a developer tries to apply caching to a method that should not be cached, they will get an error at compile time. Note that IsAssignableFrom also covers classes that derive from the given type and types that implement the given interface; in our case, types such as FileStream, List<T>, and so on are therefore covered as well.
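The IsAssignableFrom behavior is easy to verify directly. One caveat worth knowing: string itself implements IEnumerable, so under this check a string-returning method would be rejected too:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;

class AssignabilityDemo
{
    static void Main()
    {
        // Derived classes are covered...
        Console.WriteLine(typeof(Stream).IsAssignableFrom(typeof(FileStream)));     // True
        // ...and so are interface implementations.
        Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(typeof(List<int>))); // True
        // Caveat: string implements IEnumerable as well.
        Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(typeof(string)));    // True
    }
}
```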

Multithreading


Great: at this point we have a solid solution for adding caching to any method that needs it. But have you spotted the potential problem hidden in this caching aspect? In multithreaded applications (such as a website), caching does its job well: after the first "user" populates the cache, every subsequent "user" reaps the benefit of fast cache access. But what happens when two users request the same information at the same time? With the aspect as written, both users will compute the same cache value. For a car dealership website this hardly matters, but if your web server handles hundreds or thousands of visitors requesting the same information simultaneously, it becomes very important. If they all issue requests at the same moment, our cache will keep recomputing the same values.

A simple fix would be to take a lock on every cache access. But locking is an expensive, slow operation, so it is better to first check whether the key exists in the cache and only lock when it does not. In that case, however, several threads may simultaneously find the key missing and proceed to compute it, so we must check for the key twice (double-checked locking): once outside the locked section and once inside it:
[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    [NonSerialized] private object syncRoot;
    // (the _cache field and BuildCacheKey method are unchanged from the earlier listing)

    public override void RuntimeInitialize(MethodBase method)
    {
        syncRoot = new object();
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        var key = BuildCacheKey(args.Arguments);
        if (_cache[key] != null)
        {
            args.ReturnValue = _cache[key];
        }
        else
        {
            lock (syncRoot)
            {
                if (_cache[key] == null)
                {
                    var returnVal = args.Invoke(args.Arguments);
                    args.ReturnValue = returnVal;
                    _cache[key] = returnVal;
                }
                else
                {
                    args.ReturnValue = _cache[key];
                }
            }
        }
    }
}



It looks a bit repetitive, but it is a big performance win for high-load scenarios. Instead of locking the cache itself, I lock a private object specific to the method the aspect is applied to, which minimizes lock contention on cache access.
I hope this isn't too confusing. Concurrency problems can be bewildering, but in many applications they are a reality. Armed with this aspect, you no longer have to worry about your own mistakes or those of less experienced developers; or of new hires, or of developers with 30 years of COBOL behind them seeing C# for the first time :). All they need to know is how to decorate methods with the Cache aspect; they do not need to know how it is implemented, nor how to make their methods thread-safe. They can concentrate on their own piece of code without being distracted by cross-cutting concerns.
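As an aside, on .NET 4 and later the same at-most-once guarantee can be obtained without hand-written double-checked locking by combining ConcurrentDictionary.GetOrAdd with Lazy<T>. A sketch of the idea (LazyCache is a hypothetical name, not part of the article's aspect):

```csharp
using System;
using System.Collections.Concurrent;

// Lazy<T>'s default thread-safety mode (ExecutionAndPublication) ensures
// the factory delegate runs at most once per key, even under contention.
public class LazyCache
{
    private readonly ConcurrentDictionary<string, Lazy<object>> _store =
        new ConcurrentDictionary<string, Lazy<object>>();

    public object GetOrCompute(string key, Func<object> compute)
    {
        // Two threads may both create a Lazy wrapper, but GetOrAdd publishes
        // only one of them, and only the published wrapper's factory executes.
        return _store.GetOrAdd(key, _ => new Lazy<object>(compute)).Value;
    }
}
```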
