Hello OpenResty folks!
Recently, I have been working on a new caching module for hot code paths
in OpenResty applications. I called it lua-resty-mlcache and just
released its 1.0.0 version:
https://github.com/thibaultcha/lua-resty-mlcache
It is well tested and available on opm and LuaRocks :)
Its goal is to combine the power of lua-resty-lrucache, lua_shared_dict,
and lua-resty-lock to provide a full-fledged solution for caching values
retrieved via I/O operations (such as database reads). The README has a
nice little diagram to give you a better idea of how it works, since a
picture is worth a thousand words!
Here is a simple usage example:
local mlcache = require "resty.mlcache"

local cache, err = mlcache.new("cache_shared_dict", {
    lru_size = 1000, -- size of the L1 (lua-resty-lrucache) cache
})
if not cache then
    ngx.log(ngx.ERR, "could not create mlcache: ", err)
end

local my_value, err = cache:get("my_key", { ttl = 3600 }, function()
    -- I/O lookup, e.g. a database read returning a row
    return my_row
end)
In this example, 'my_value' will be retrieved from the callback the
first time, and from lua_shared_dict or lua-resty-lrucache on subsequent
calls.
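The callback may also return nil to signal a miss, and such misses are
cached as well. A small sketch, assuming the neg_ttl option described in
the README:

```lua
-- negative caching: a nil return from the callback is itself cached,
-- with its own (typically shorter) TTL set via the neg_ttl option
local value, err = cache:get("missing_key", { ttl = 3600, neg_ttl = 30 },
function()
    return nil -- e.g. the row was not found in the database
end)
-- value is nil here, and the miss is served from cache for 30 seconds
```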
It supports a lot of useful features for a typical key/value caching module:
* Caching and negative caching (cached misses) with TTLs and negative
TTLs.
* Built-in mutex via lua-resty-lock to prevent dog-pile effects on your
database/backend on cache misses.
* Built-in inter-worker communication to propagate cache invalidations,
allowing workers to update their L1 (lua-resty-lrucache) caches upon
changes (set(), delete()).
* Multiple isolated instances can be created to hold various types of
data while relying on the same lua_shared_dict L2 cache.
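To give an idea of the invalidation workflow, here is a sketch based on
the README (hedged: it assumes the cache was created with the ipc_shm
option, and the exact details may differ). A worker that calls delete()
broadcasts the invalidation, and other workers poll for such events with
update() so their L1 caches stay fresh:

```lua
-- assumes the cache was created with a shared dict dedicated to
-- inter-worker events, e.g.:
--   mlcache.new("cache_shared_dict", { ipc_shm = "ipc_shared_dict" })

-- worker A: invalidate a key after e.g. a database write
local ok, err = cache:delete("my_key")

-- any worker, before serving reads: poll pending invalidation events
-- so the local L1 (lua-resty-lrucache) cache drops stale entries
local ok, err = cache:update()
```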
I hope this will be useful to a few people; caching on hot code paths is,
I think, a very common OpenResty pattern :)
Kind regards,
Thibault