Priority Lock in .NET

Every programmer who has used more than one thread in a program has run into synchronization primitives. In .NET there are plenty of them; I will not list them here, since MSDN has already done that for me.

I have used many of these primitives, and they have served me well. In this article, however, I want to talk about the ordinary lock in a desktop application and how a new (at least to me) primitive appeared, which could be called PriorityLock.

Problem


When you develop a highly loaded multithreaded application, sooner or later a manager appears somewhere that serves countless threads. That is what happened in my case. The manager worked fine, processing tons of requests from many hundreds of threads, and inside it an ordinary lock did its job.

Then one day a user (me, for example) clicks a button in the application UI, a thread (not the UI thread, of course) flies off to the manager expecting a super friendly reception, and instead it is met by Aunt Klava from the grimmest reception desk of the most backwater clinic with the words: “I don’t give a damn who sent you. I have 950 more just like you. Go and line up behind them; how you sort that out is your problem.” That is how lock works in .NET. Everything will, of course, be executed correctly in the end, but the user clearly did not plan to wait several seconds for a response to their action.
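Schematically (the Manager class and the method names below are illustrative, not taken from the real application), the situation looks like this:

public class Manager
{
    private readonly object _sync = new object();

    // Called from hundreds of background worker threads.
    public void ProcessBackgroundRequest()
    {
        lock (_sync)
        {
            // ... heavy work ...
        }
    }

    // Called in response to a button click (on a non-UI thread).
    // With an ordinary lock this call gets no priority at all:
    // it simply joins the crowd of waiters.
    public void ProcessUserRequest()
    {
        lock (_sync)
        {
            // ... the user keeps waiting ...
        }
    }
}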

This is where the heartbreaking story ends and the solution to the technical problem begins.

Solution


Having studied the standard primitives, I did not find a suitable option, so I decided to write my own lock with both a standard and a high-priority entry. By the way, after writing it I also searched NuGet and found nothing similar, although I may simply have searched poorly.

To write such a primitive (or rather, no longer quite a primitive) I needed SemaphoreSlim, SpinWait, and Interlocked operations. Below is the first version of my PriorityLock (synchronous code only, but that is the most important part), along with explanations.

In terms of synchronization there are no revelations: while one thread holds the lock, nobody else can enter. If a high-priority thread arrives, it is let in ahead of all the waiting low-priority ones.

The LockMgr class is what your code works with; it is the synchronization object itself. It creates the Locker and HighLocker objects and holds the semaphores, the SpinWaits, the counters of threads wanting to enter the critical section, the current owning thread, and the recursion counter.

using System.Threading;

public class LockMgr
{
    internal int HighCount;       // number of high-priority threads that have announced themselves
    internal int LowCount;        // 1 while a low-priority thread holds the lock, otherwise 0
    internal Thread CurThread;    // thread currently holding the lock (used for recursion detection)
    internal int RecursionCount;  // recursive acquisitions made by CurThread
    internal readonly SemaphoreSlim Low = new SemaphoreSlim(1);   // serializes low-priority entries
    internal readonly SemaphoreSlim High = new SemaphoreSlim(1);  // serializes high-priority entries
    internal SpinWait LowSpin = new SpinWait();
    internal SpinWait HighSpin = new SpinWait();

    public Locker HighLock()
    {
        return new HighLocker(this);
    }

    public Locker Lock(bool high = false)
    {
        return new Locker(this, high);
    }
}

The Locker class implements the IDisposable interface. To support recursive acquisition of the lock, we remember the current thread when the lock is taken and check it on entry. Then, depending on the priority: in the high-priority case we immediately announce that we have arrived (increment the HighCount counter), acquire the High semaphore, and wait (if necessary) for the lock to be released by a low-priority holder, after which the lock is ours. In the low-priority case we acquire the Low semaphore, then wait for all high-priority threads to finish and, briefly taking the High semaphore, increment LowCount.

It is worth mentioning that HighCount and LowCount have different meanings: HighCount is the number of high-priority threads that have arrived at the lock, whereas LowCount merely indicates that a (single) low-priority thread has entered the lock.

public class Locker : IDisposable
{
    private readonly bool _isHigh;
    private LockMgr _mgr;

    public Locker(LockMgr mgr, bool isHigh = false)
    {
        _isHigh = isHigh;
        _mgr = mgr;
        // Recursive entry by the thread that already holds the lock.
        if (mgr.CurThread == Thread.CurrentThread)
        {
            mgr.RecursionCount++;
            return;
        }
        if (_isHigh)
        {
            // Announce a high-priority arrival so that low-priority waiters yield,
            // queue behind other high-priority threads, then wait for a
            // low-priority holder (if any) to leave.
            Interlocked.Increment(ref mgr.HighCount);
            mgr.High.Wait();
            while (Interlocked.CompareExchange(ref mgr.LowCount, 0, 0) != 0)
                mgr.HighSpin.SpinOnce();
        }
        else
        {
            // Only one low-priority thread may approach the lock at a time.
            mgr.Low.Wait();
            // Let all announced high-priority threads go first.
            while (Interlocked.CompareExchange(ref mgr.HighCount, 0, 0) != 0)
                mgr.LowSpin.SpinOnce();
            try
            {
                // Briefly take the High semaphore to mark the lock as held
                // by a low-priority thread.
                mgr.High.Wait();
                Interlocked.Increment(ref mgr.LowCount);
            }
            finally
            {
                mgr.High.Release();
            }
        }
        mgr.CurThread = Thread.CurrentThread;
    }

    public void Dispose()
    {
        // Leaving a recursive acquisition: just decrement the counter.
        if (_mgr.RecursionCount > 0)
        {
            _mgr.RecursionCount--;
            _mgr = null;
            return;
        }
        // Outermost release: clear the owner and free the corresponding semaphore.
        _mgr.RecursionCount = 0;
        _mgr.CurThread = null;
        if (_isHigh)
        {
            _mgr.High.Release();
            Interlocked.Decrement(ref _mgr.HighCount);
        }
        else
        {
            _mgr.Low.Release();
            Interlocked.Decrement(ref _mgr.LowCount);
        }
        _mgr = null;
    }
}

// A convenience wrapper: a Locker that always enters with high priority.
public class HighLocker : Locker
{
    public HighLocker(LockMgr mgr) : base(mgr, true)
    { }
}


Using a LockMgr object turned out to be very concise. The example below clearly shows that _lockMgr can be re-entered inside the critical section, at which point the priority no longer matters.

private PriorityLock.LockMgr _lockMgr = new PriorityLock.LockMgr();
public void LowPriority()
{
  using (_lockMgr.Lock())
  {
    using (_lockMgr.HighLock())
    {
      // your code
    }
  }
}
public void HighPriority()
{
  using (_lockMgr.HighLock())
  {
    using (_lockMgr.Lock())
    {
      // your code
    }
  }
}

That solved my problem: user actions are now processed with high priority, nobody was hurt, and everyone won.

Asynchrony


Since SemaphoreSlim supports asynchronous waiting, I added that capability as well. The code differs only minimally; a link to the source is given at the end of the article.
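As an illustration only, here is a minimal sketch of what the asynchronous acquisition could look like if the synchronous path is mirrored with WaitAsync and Task.Yield instead of Wait and SpinOnce. The member names LockAsync and Releaser are my assumptions, not necessarily the API of the published library; they are assumed to live inside LockMgr (with using System.Threading.Tasks; added) so that they can use its fields.

// Sketch only: a possible async counterpart of Locker, without recursion support.
public async Task<IDisposable> LockAsync(bool high = false)
{
    if (high)
    {
        Interlocked.Increment(ref HighCount);          // announce a high-priority arrival
        await High.WaitAsync().ConfigureAwait(false);  // queue behind other high-priority callers
        while (Interlocked.CompareExchange(ref LowCount, 0, 0) != 0)
            await Task.Yield();                        // wait for a low-priority holder to leave
    }
    else
    {
        await Low.WaitAsync().ConfigureAwait(false);   // one low-priority caller at a time
        while (Interlocked.CompareExchange(ref HighCount, 0, 0) != 0)
            await Task.Yield();                        // let announced high-priority callers go first
        try
        {
            await High.WaitAsync().ConfigureAwait(false);
            Interlocked.Increment(ref LowCount);       // mark the lock as held by a low-priority caller
        }
        finally
        {
            High.Release();
        }
    }
    return new Releaser(this, high);
}

// Releases the corresponding semaphore, mirroring Locker.Dispose
// but without the CurThread/RecursionCount bookkeeping.
private sealed class Releaser : IDisposable
{
    private readonly LockMgr _mgr;
    private readonly bool _high;
    public Releaser(LockMgr mgr, bool high) { _mgr = mgr; _high = high; }
    public void Dispose()
    {
        if (_high)
        {
            _mgr.High.Release();
            Interlocked.Decrement(ref _mgr.HighCount);
        }
        else
        {
            _mgr.Low.Release();
            Interlocked.Decrement(ref _mgr.LowCount);
        }
    }
}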

It is important to note that a Task is not tied to a thread in any way, so re-entrant use of the asynchronous lock cannot be implemented in the same manner. Moreover, the Task.CurrentId property, as MSDN describes it, guarantees nothing. That is where my options ended.
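A small illustration of why the Thread-based recursion check cannot be carried over to async code: the continuation after an await may well resume on a different thread-pool thread, so comparing Thread.CurrentThread against the stored lock owner would give a false negative.

using System;
using System.Threading;
using System.Threading.Tasks;

public static class AwaitThreadDemo
{
    public static async Task Main()
    {
        Console.WriteLine($"Before await: thread {Thread.CurrentThread.ManagedThreadId}");
        await Task.Delay(100).ConfigureAwait(false);
        // The continuation typically resumes on a different thread-pool thread,
        // so the thread that "owned" the lock before the await is not necessarily
        // the one running the code after it.
        Console.WriteLine($"After await:  thread {Thread.CurrentThread.ManagedThreadId}");
    }
}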

While searching for a solution I came across the NeoSmart.AsyncLock project, whose description claims support for re-entrant asynchronous locking. Technically, the re-entrancy does work, but unfortunately the lock itself does not behave as a lock. Be careful if you use this package: be aware that it does NOT work correctly!

Conclusion


The result is a class that supports synchronous operation with re-entrancy and asynchronous operation without it. Synchronous and asynchronous operations can be used side by side, but they cannot be combined with each other, precisely because the asynchronous variant does not support re-entrancy.

I hope I am not alone with this kind of problem and that my solution will be useful to someone. I have published the library on GitHub and NuGet.

The repository contains tests that demonstrate PriorityLock working correctly. NeoSmart.AsyncLock was run against the asynchronous part of the same test, and the test failed.

Link to nuget
Link to github
