
Caching on Windows Azure: Tutorial
The caching problem confronts any heavily loaded application. On Windows Azure, where the main way to scale is to add application instances, the cache becomes even more important, because it can provide a "shared memory" for all of those instances.
Memory cache
Strictly speaking, this class has nothing to do with Azure, but it is impossible not to mention it in an article on caching.
Starting with .NET 4, there is a new System.Runtime.Caching namespace. It contains the MemoryCache class which, as the name implies, keeps a store of objects in memory. What matters to me is that it works much faster than Cache from the System.Web.Caching namespace. Another nice feature is that you can create several caches with different settings.
For convenience, you can describe the cache settings in the configuration right away and then access the cache through MemoryCache.Default. Other caches have to be created explicitly, and you must keep references to them yourself. Parameters can be set both at runtime and in configuration:
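A minimal sketch of both approaches (cache names and limit values here are illustrative, not from the original article):

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class MemoryCacheDemo
{
    static void Main()
    {
        // The default cache; its limits can also be described in app.config
        ObjectCache cache = MemoryCache.Default;

        // A separate cache created at runtime with its own settings
        var settings = new NameValueCollection();
        settings["cacheMemoryLimitMegabytes"] = "100";
        var customCache = new MemoryCache("customCache", settings);

        // Sliding expiration: the item lives 10 minutes after the last access
        var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) };
        cache.Set("greeting", "hello", policy);

        Console.WriteLine(cache.Get("greeting"));
    }
}
```

Note that you must hold on to the `customCache` reference yourself; unlike `MemoryCache.Default`, nothing else keeps it alive.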
In-role cache
As a rule, each application in the cloud runs several instances: this provides both fault tolerance (the Azure SLA essentially requires at least two instances) and scalability as the load grows.
Sooner or later there is a desire to give all instances a single shared "memory". The simplest example is the session. For this, Azure offers the In-Role Cache mechanism, which lets you allocate part or all of an instance's memory to a cache that is then available from every instance of the application.
To use it, you need to add the Windows Azure Caching package to the solution and configure the roles to work with it.
First we need to enable the cache on the instances or add the instances allocated for the cache to the solution.
Note: the cache is not supported on extra small instances.
Screenshots: Enabling Cache on an Existing Role; Creating a Cached Role.
There will always be a cache named "default". We can add further caches with their own names and settings.
Cache settings
High Availability
Requires that the solution have at least two instances hosting the cache. Cached data is then stored in at least two copies, and the failure of one instance does not result in data loss. Use this option carefully: the cache starts consuming considerably more resources.
Notifications
Enables or disables the notification mechanism for the cache. Notifications have only just appeared, and I have not yet come up with real scenarios for them.
Eviction policy
Determines how the cache is cleared when it overflows. So far only one option is available: LRU (Least Recently Used), i.e. the least recently accessed object is evicted first.
Expiration Type and Time To Live
These two interrelated parameters specify how, and after how many minutes, objects expire and disappear from the cache. Whereas the eviction policy handles the emergency case (a cache overflow usually leads to nothing good), expiration describes how objects should leave the cache during normal operation.
None. Objects are cached forever (until a restart). Requires the lifetime to be set to zero.
Absolute. The object is removed a fixed time after it was put into the cache.
Sliding window. My favorite option. The object is removed a fixed time after the last access, so objects that keep being accessed live in the cache indefinitely.
Configure Cache Client
In general, everything is quite simple: in the configuration file we describe what caches we have and where they are located. Insert the following lines into the configuration section of the configuration file (a template should already have been created by NuGet when the cache package was installed).
…
As the identifier, you must specify the name of the role in the project that hosts the cache (in our case, CacheWorkerRole), not the name of the Azure endpoint such as mycoolapp.cloudapp.net.
The only tag that probably needs explanation is localCache, which says that the instance may keep objects locally and defines how it does so. objectCount determines how many objects are kept locally; when this number is reached, the local copy evicts the 20% of objects that have gone unaccessed the longest.
Timeout-based synchronization (TimeoutBased) means that objects live in the local cache for the number of seconds given in ttlValue. NotificationBased synchronization requires the notification mechanism to be enabled on the cache; in that case ttlValue specifies how often the local copy checks the cache for changes.
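For reference, a typical dataCacheClients section might look roughly like this (a sketch; the numeric values are illustrative, and the attribute names should be checked against the schema that NuGet generated):

```xml
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- identifier is the name of the role hosting the cache, not the cloud endpoint -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole" />
    <!-- keep up to 10000 objects locally, refreshing each one after 300 seconds -->
    <localCache isEnabled="true" sync="TimeoutBased" objectCount="10000" ttlValue="300" />
  </dataCacheClient>
</dataCacheClients>
```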
The basic settings are complete. Now, as an example, let us connect our web role's sessions to the cache. In ASP.NET this is done very simply by replacing the standard InProc session state provider with the cache provider.
A session is a great example of where a cache can be used, but you should not limit yourself to sessions. For example, you can put rendered pages into the cache so that other instances do not waste time rebuilding them. And of course we can, and should, put our own data into the cache.
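The replacement in web.config looks roughly like this (a sketch; the provider name is arbitrary, and the type should be verified against the version of the caching package actually installed):

```xml
<sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
  <providers>
    <add name="AFCacheSessionStateProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default" dataCacheClientName="default" />
  </providers>
</sessionState>
```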
Before moving on to examples of working with a cache from code, let us dwell on a completely new type of cache.
Cache service
For now, the cache as a service is available in preview mode. It may be useful in scenarios where different solutions need access to the same data: an In-Role cache is available only within the solution it belongs to, while the cache service has no such limitation.
Setting up the cache service is exactly the same as setting up the In-Role cache, except that it is done not in Visual Studio but in the Azure management portal. A cache access key is then added to the role configuration.
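The client configuration for the service might look roughly like this (a sketch; the endpoint and token are placeholders you take from the portal, and the security elements should be checked against the generated template):

```xml
<dataCacheClient name="default">
  <!-- the cache endpoint from the portal, not a role name -->
  <autoDiscover isEnabled="true" identifier="mycache.cache.windows.net" />
  <securityProperties mode="Message" sslEnabled="false">
    <!-- the access key copied from the management portal -->
    <messageSecurity authorizationInfo="[access key from the portal]" />
  </securityProperties>
</dataCacheClient>
```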

Other advantages of the cache service include:
- a slightly lower price;
- lack of headache during deployment updates (they will not affect the cache);
- support for the memcached protocol, which allows you to connect not only PaaS solutions to it, but also any type of virtual machine.
A few more words about setup
Before creating a cache, it is worth estimating how many objects it will hold, how large they are, and how often they will be read and written. Once these values are known, plug them into one of the Excel capacity-planning spreadsheets (for the service or for roles) to get a recommendation on how many billable units you need.
Objects are stored in the cache in serialized form, so to determine an object's size you need to measure it after serialization, together with its key.
The size of a serialized object in the cache is limited to 8 megabytes. If you exceed the limit, you can enable compression for the cache so that objects are compressed before they enter it. In general, compression can improve cache performance whenever the cost of packing and unpacking is lower than the cost of network transfer; unfortunately, this can only be determined experimentally.
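A rough way to estimate that size (a sketch: it assumes NetDataContractSerializer, which to my knowledge is what the caching layer uses, and approximates the key's contribution by its character count):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

static class CacheSizeEstimator
{
    // Serialize the value the same way the cache does and measure the stream,
    // adding an approximation for the serialized key.
    public static long EstimateSize(string key, object value)
    {
        var serializer = new NetDataContractSerializer();
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, value);
            return stream.Length + key.Length * sizeof(char);
        }
    }
}
```

Measuring a few representative objects this way gives the per-object figure the capacity-planning spreadsheet asks for.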
To increase cache throughput, you can increase the number of connections to it with the maxConnectionsToServer parameter. By default, only one cache connection is created.
Working with a cache from code
Everything necessary for working with the cache lives in Microsoft.ApplicationServer.Caching.
First we create a cache object, and then work with the data using the Add, Put, Get, and Remove methods.
DataCache dc = new DataCache("default");
dc.Add("test", DateTime.Now); // add an object to the cache
dc.Put("test", DateTime.Now); // add or replace
DateTime dt = (DateTime)dc.Get("test"); // get
dc.Remove("test"); // remove
Race conditions
To prevent races, use GetAndLock, PutAndUnlock, and Unlock. Note that GetAndLock does not block a plain Get, so it does not prevent dirty reads.
try
{
    DataCacheLockHandle lockHndl;
    object value = dc.GetAndLock("test", new TimeSpan(0, 0, 5), out lockHndl);
    // modify the object
    dc.PutAndUnlock("test", value, lockHndl);
    // or dc.Unlock("test", lockHndl) if nothing was changed
}
catch (DataCacheException de)
{
    if (de.ErrorCode == DataCacheErrorCode.KeyDoesNotExist)
    {
        // the object does not exist
    }
}
Reading updates
It is easy to imagine a scenario where some action must be performed when an object changes in the cache. You could fetch the object and compare it with the current value, but you can reduce the load on the cache by using the GetIfNewer method.
object val = DateTime.Now;
DataCacheItemVersion version = dc.Put("test", val);
while (true)
{
    val = dc.GetIfNewer("test", ref version);
    if (val != null)
    {
        // the object has changed
    }
    Thread.Sleep(1000);
}
If the object was put into the cache somewhere else, you can obtain its version from a DataCacheItem object.
DataCacheItem dci = dc.GetCacheItem("test");
DataCacheItemVersion version = dci.Version;
object val = dci.Value;
Regions
Objects can be grouped using regions. In each of the methods above you can pass the name of a region (after creating it) as an extra argument to place the object in that region. After that it is easy to enumerate the objects in the region.
if (dc.CreateRegion("region"))
{
    // the region did not exist and has been created
}
dc.Put("test", DateTime.Now, "region");
foreach (KeyValuePair<string, object> kvp in dc.GetObjectsInRegion("region"))
{
    // process the objects
}
Regions have a couple of peculiarities to keep in mind.
Do not enumerate the objects in a region directly, as in the code above: if another thread adds or removes an object while you iterate, you will get a "collection was modified" exception.
The second peculiarity: a region lives entirely within one instance. So if objects are distributed unevenly across regions, one instance may sit idle while another is overloaded.
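A safer enumeration pattern (a sketch): snapshot the region's contents into a list first, so that concurrent Put or Remove calls on other threads cannot invalidate the enumerator mid-iteration.

```csharp
// Copy the region contents before processing; the list is a private
// snapshot, immune to concurrent changes in the cache.
var snapshot = new List<KeyValuePair<string, object>>(dc.GetObjectsInRegion("region"));
foreach (KeyValuePair<string, object> kvp in snapshot)
{
    // process the copied objects
}
```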
Things to keep in mind when working with the cache
- The size of an object in the cache is limited to 8 megabytes.
- If regions are used, they should be filled evenly.
- Cache objects locally whenever possible.
- Enable high availability only where you need it.
- Use locks (GetAndLock) only where necessary.
- Do not re-read objects that have not been updated (use GetIfNewer).
I hope this article helps others step on fewer rakes than I did. Good luck with your development, and may your applications be fast.