Distributed Caching

Moderated by: Andrea Guzzo

While coping with continuous growth, we need to find solutions
that allow us to scale without drastically changing or adapting our business logic
and without introducing unnecessary complexity or points of failure.

To mitigate the continuous access to our underlying storage systems we have employed
different solutions: some involving memcached, some involving Redis, and others using
custom middleware providing distributed key-value store functionality.
Each solution came with both advantages and disadvantages, and in some cases we
started experiencing issues only after our growth pushed us past certain thresholds.
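The solutions above all revolve around the cache-aside pattern: the application checks the cache first and falls back to the underlying storage on a miss. A minimal sketch, with an in-process map standing in for memcached/Redis and another map standing in for the database (both names and data are illustrative, not from the actual systems discussed here):

```go
package main

import (
	"fmt"
	"sync"
)

// store simulates the underlying database.
var store = map[string]string{"user:1": "Andrea"}

// cache is a stand-in for memcached/Redis: an in-process map with a lock.
var (
	mu    sync.Mutex
	cache = map[string]string{}
)

// get implements the cache-aside pattern: check the cache first,
// fall back to the database on a miss, then populate the cache.
func get(key string) string {
	mu.Lock()
	defer mu.Unlock()
	if v, ok := cache[key]; ok {
		return v // cache hit
	}
	v := store[key] // cache miss: read from the database
	cache[key] = v  // populate the cache for subsequent reads
	return v
}

func main() {
	fmt.Println(get("user:1")) // first call misses and reads the store
	fmt.Println(get("user:1")) // second call is served from the cache
}
```

Every caller has to carry this miss-handling logic itself, which is one of the pain points that motivated the approach described next.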

A recent approach led us to develop in-house our own (open-sourced) key/value store
and caching layer, partially inspired by groupcache (implemented in Go) and
partially equivalent to memcached, but with the possibility of employing storage plugins
that are aware of our infrastructure and can include the business logic necessary to
retrieve the data from the underlying storage (read: our databases).
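The storage-plugin idea can be sketched as a read-through cache parameterized by a fetch callback. The `Fetcher` type and `Cache` API below are hypothetical illustrations of the concept, not the actual libshardcache C API:

```go
package main

import "fmt"

// Fetcher is a hypothetical stand-in for a storage plugin: a callback
// holding the business logic needed to load a value from the underlying
// storage (e.g. a database) when the cache misses.
type Fetcher func(key string) (string, bool)

// Cache is a minimal read-through cache parameterized by a Fetcher.
type Cache struct {
	data  map[string]string
	fetch Fetcher
}

func New(f Fetcher) *Cache {
	return &Cache{data: map[string]string{}, fetch: f}
}

// Get returns the cached value, invoking the plugin on a miss and
// caching whatever it returns.
func (c *Cache) Get(key string) (string, bool) {
	if v, ok := c.data[key]; ok {
		return v, true // served from the cache
	}
	v, ok := c.fetch(key) // miss: delegate to the storage plugin
	if ok {
		c.data[key] = v
	}
	return v, ok
}

func main() {
	db := map[string]string{"greeting": "hello"} // illustrative backing store
	c := New(func(key string) (string, bool) {
		v, ok := db[key]
		return v, ok
	})
	v, _ := c.Get("greeting")
	fmt.Println(v)
}
```

Because the fetch logic lives behind the cache rather than in every caller, the application only ever asks the cache for a key; how and where the value is loaded becomes a pluggable detail.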

During the session I’ll describe the process and the ideas that led us to start developing libshardcache and shardcached, as well as our initial experiences employing them in production.

The project is hosted on GitHub and consists of a main library, libshardcache,
and the daemon code, shardcached.