Overview and New Features of the Latest CacheManager Releases
With CacheManager v1.1 just released and v1.0 released earlier this year, I thought I should take some time to write about the new features added to this library.
New OnRemoveByHandle Event
A new event called `OnRemoveByHandle` has been added (see issue 116).
The event triggers when the cache layer decides to evict a key, for example because the key expired or because of memory limits.
The event transports the following information via event arguments:

- The `Key` and `Region` (if used)
- The actual value stored under the key (only possible with in-memory caches)
- The cache level, a simple number indicating the level, starting at one
I decided not to use the existing `OnRemove` event for this mechanic because:

- the arguments are slightly different
- `OnRemoveByHandle` gets triggered per cache handle/layer and not globally, as `OnRemove` and all the other events do
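Wiring up the new event could look like the following sketch. It assumes the CacheManager NuGet package and the fluent `CacheFactory` configuration; the event argument property names (`Key`, `Region`, `Value`, `Level`) follow the list above, but treat the exact API shape as an assumption.

```csharp
// Sketch only; assumes the CacheManager package is referenced.
var cache = CacheFactory.Build<string>(settings => settings
    .WithSystemRuntimeCacheHandle());

cache.OnRemoveByHandle += (sender, args) =>
{
    // Value is only available for in-memory caches; Level starts at 1.
    Console.WriteLine(
        $"Key '{args.Key}' (region '{args.Region}') was removed " +
        $"from cache level {args.Level}; value: {args.Value}");
};
```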
Improved Cache Sync
If `UpdateMode.Up` is used, CacheManager now also removes evicted or expired keys from layers above the layer triggering the event.
This was a key feature missing in earlier versions of CacheManager. Together with the Redis `CacheBackplane`, it works really nicely to keep instances of a multi-layered cache in sync and mostly free of stale data.
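As a sketch, here is a two-layer configuration where `UpdateMode.Up` cleans up the in-memory layer whenever the Redis layer below it reports an eviction or expiration. The fluent method names are based on CacheManager's configuration API; treat the details as assumptions.

```csharp
// Sketch: layer 1 is in-memory, layer 2 is Redis. With UpdateMode.Up,
// an eviction/expiration reported by the Redis handle also removes the
// key from the in-memory handle above it.
var cache = CacheFactory.Build<string>(settings => settings
    .WithUpdateMode(CacheUpdateMode.Up)
    .WithSystemRuntimeCacheHandle()            // layer 1 (in-memory)
    .And
    .WithRedisConfiguration("redis", cfg => cfg
        .WithEndpoint("localhost", 6379))
    .WithRedisCacheHandle("redis", true));     // layer 2 (source layer)
```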
Limitations with In-Memory Caches
The implementation for this event is very cache-vendor specific, and so is the responsiveness.
All the in-memory caches check for expired keys either on a fixed timer or only on access. This means that the `OnRemoveByHandle` event might not get triggered immediately. The delay between the key actually expiring and the event being triggered can range from seconds to minutes, depending on the cache vendor.
Limitations with Distributed Caches
Only Redis even has a mechanism to implement this feature. Memcached and Couchbase do not support anything like it; therefore, CacheManager can trigger `OnRemoveByHandle` only when Redis is used and properly configured.
Configuration for Redis Keyspace Notifications
For Redis, I'm using Redis' keyspace notifications, which use the built-in pub/sub system of Redis to transport the events (so be aware that this can mean even more network traffic).
To have the `OnRemoveByHandle` event trigger with a Redis cache handle, two things have to be configured:
- In CacheManager's Redis configuration, `KeyspaceNotificationsEnabled` must be enabled. The flag can be set via all the different ways we can configure CacheManager.
- The Redis server has to be configured to actually send keyspace notifications.
To configure Redis, add `notify-keyspace-events` with a valid value to the server configuration.
The minimum value needed is `Exe`, which triggers notifications on evict and expire.
Hint: If you also want CacheManager to listen for `del` events, in case you delete keys manually, also add `g` to the `notify-keyspace-events` setting to include all generic commands, or just set it to `AKE`.
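For reference, a minimal server-side snippet; the flag letters follow the Redis documentation (`E` = keyevent events, `x` = expired, `e` = evicted, `g` = generic commands, `AKE` = everything):

```
# redis.conf: E = keyevent events, x = expired keys, e = evicted keys
notify-keyspace-events Exe

# or at runtime, without restarting the server:
# redis-cli CONFIG SET notify-keyspace-events Exe

# to also cover generic commands like DEL, add "g" (Exeg) or use "AKE"
```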
Cache Backplane with In-Memory Caches
The CacheManager backplane feature was intended to be used with at least two layers of cache, where the source of truth (source layer) is a distributed cache like Redis.
Now, the backplane can also be used to synchronize multiple instances of your app even if only one in-memory cache layer is used.
The use case is pretty specific, but still, useful ;) If you have multiple instances of your app running and delete a key in one instance, the backplane will distribute this delete to all the other connected instances and delete the key in those instances, too, if it exists.
The difference from the original implementation and intention is that there is no distributed cache as the source of truth.
It is important to note that the backplane never transports the cached data to store it in each instance, for example to mimic a distributed cache. If you want that functionality, just use Redis.
Also important to note: the only implementation of the CacheManager backplane still uses Redis pub/sub, meaning that to use this feature, you have to have a Redis server running somewhere.
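A single-layer setup with the backplane could be configured roughly like this sketch; the configuration key name `"redisConnection"` is just a placeholder, and the fluent method names are assumptions based on CacheManager's configuration API.

```csharp
// Sketch: one in-memory cache handle plus the Redis backplane, without
// a Redis cache handle. Removes performed in one app instance get
// broadcast via Redis pub/sub and applied to the in-memory caches of
// all other connected instances.
var cache = CacheFactory.Build<string>(settings => settings
    .WithSystemRuntimeCacheHandle()
    .And
    .WithRedisConfiguration("redisConnection", cfg => cfg
        .WithEndpoint("localhost", 6379))
    .WithRedisBackplane("redisConnection"));
```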
Bond Serialization
Microsoft implemented a very interesting serialization library with Bond, which is mainly focused on performance over everything else.
Bond has some limitations compared to e.g. vanilla Json serialization using Newtonsoft.Json when it comes to certain types and complexity of the objects serialized, pretty similar to how Protobuf works.
The performance though is really impressive! Bond comes with a few different serializers:

- `CompactBinary`, which tries to optimize the size of the serialized byte array over performance
- `FastBinary`, which is faster than `CompactBinary`; in many cases the difference is negligible, depending on the data
- `SimpleJson`, a serializer which uses Newtonsoft.Json but with a custom implementation which is much faster than the full Newtonsoft.Json serializer (but of course with the limitations of Bond)
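Selecting one of the Bond serializers should be a one-liner in the configuration. A sketch; the extension method name below is an assumption based on the naming pattern of the CacheManager serialization packages:

```csharp
// Sketch: use the Bond CompactBinary serializer for the Redis layer.
// The serializer only matters for handles that must serialize values
// (distributed caches); in-memory handles store the object itself.
var cache = CacheFactory.Build<string>(settings => settings
    .WithBondCompactBinarySerializer()   // method name is an assumption
    .WithRedisConfiguration("redis", cfg => cfg
        .WithEndpoint("localhost", 6379))
    .WithRedisCacheHandle("redis"));
```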
Here are some performance results of all the CacheManager serializers:
Method | Mean | Scaled | Allocated |
---|---|---|---|
Json | 319.7445 us | 1.00 | 157.08 kB |
Binary | 498.2847 us | 1.56 | 327.16 kB |
JsonGz | 1,018.0015 us | 3.19 | 312.9 kB |
ProtoBuf | 135.8551 us | 0.43 | 152.68 kB |
BondBinary | 85.7551 us | 0.27 | 65.41 kB |
BondFastBinary | 83.4832 us | 0.26 | 65.7 kB |
BondSimpleJson | 232.3750 us | 0.73 | 160.55 kB |
For more details read Issue 127.
Reuse of Distributed Cache Clients
The CacheManager configuration for Redis, Memcached and Couchbase now allows passing in an already initialized client.
This not only adds flexibility in case some client-specific configuration options are not available through CacheManager, it also allows reuse of the client, in case you want to do more with Redis than just caching, for example.
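For Redis, passing in an existing StackExchange.Redis `ConnectionMultiplexer` might look like the following sketch (the exact overload accepting the multiplexer is an assumption):

```csharp
// Sketch: create (or reuse) a StackExchange.Redis connection and hand
// it to CacheManager instead of letting CacheManager create its own.
var multiplexer = ConnectionMultiplexer.Connect("localhost:6379");

var cache = CacheFactory.Build<string>(settings => settings
    .WithRedisConfiguration("redis", multiplexer)
    .WithRedisCacheHandle("redis"));

// The same connection stays available for non-caching Redis work:
var db = multiplexer.GetDatabase();
db.StringSet("not-managed-by-cachemanager", "hello");
```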
Some Love for Couchbase
The Couchbase implementation of CacheManager got some improvements.
The .NET client library now supports .NET Standard, and so does the CacheManager implementation.
I've added many new configuration options and made use of some helpers built into the Couchbase client to handle the cluster connections and potentially reuse them across multiple CacheManager instances.
Redis TwemProxy Support and Compatibility Mode
TwemProxy support had been on the to-do list for a pretty long time; now it is implemented. It comes with some limitations, as TwemProxy doesn't support all the Redis APIs, but those are mostly handled gracefully in CacheManager.
Also, a new compatibility mode setting has been added to the Redis configuration to explicitly set the Redis server version. This allows disabling the Lua-based implementation of CacheManager, for example, in case your Redis server doesn't support it.
Other Things and Future of CacheManager
Redesigned Website
The CacheManager website got a small facelift and a new address: http://cachemanager.michaco.net. I hope you like it; let me know what you guys think ;)
The API documentation has also been reworked a little; it has a new search function, for example.
Update to CacheManager Documentation
I'm also working on updating the existing documentation and might also add more documentation for all the configuration options. A lot has changed and there are so many more options and features now.
All this takes a lot of time and effort though ;)
List Cached Keys
This feature has been requested numerous times now and just received a PR. Please follow the discussion in Issue 163 and let me know what you think!
Doing this with distributed caches has a lot of limitations; only Redis supports searching for keys via `Keys` or `Scan` calls. This means the feature will not be supported if Memcached or Couchbase is used.
In addition, those are very performance-intensive operations, and even the Redis documentation states that those calls should not be used in production systems. That was the main reason I did not add this feature to CacheManager in the first place...
Implementation Options
One option would be to add this feature only for in-memory caches. If you still want to search keys in Redis, you could still do that outside of CacheManager using the Redis client directly.
Or I could just document that using this feature could have a big performance impact? I'm really not happy with that as it would be very easy to use this feature wrong and cause all kinds of issues on your Redis server(s).
Again, let me know what you think on GitHub Issue 163.