Thanks for the great responses, agentzh and Vladislav.
>We use 2 levels of caching (lua-resty-lrucache and ngx.shared.DICT)
before our memcached-like data service based on sockets in production,
for example.
If I might ask, what sort of things do you store in the LRU cache? Do you store all scalar data in shared dicts and Lua tables in the LRU cache?
Is storing even a scalar value like an integer in the LRU cache more performant than putting it in ngx.shared.DICT? Do you use the LRU cache as your very first cache layer, and only store data in a shared dict if it isn't a table and absolutely needs to be shared among all worker processes, so that it won't be recomputed needlessly?
Separate workers aside, is one or the other better at handling very high cache-check and/or cache-hit rates? And are there any other variables that come into play when deciding which of the two to use as storage?
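Just so I'm sure I understand the two-level pattern you described, here's a minimal sketch of how I picture it (worker-local lua-resty-lrucache in front of ngx.shared.DICT). The `cache` shared dict, the TTLs, and the `expensive_fetch` backend call are all my own placeholder assumptions, not anything from your setup:

```lua
-- Two-level cache sketch. Assumes `lua_shared_dict cache 10m;` in
-- nginx.conf and that lua-resty-lrucache is installed. Key names,
-- TTLs, and expensive_fetch() are illustrative placeholders.
local lrucache = require "resty.lrucache"

local lru, err = lrucache.new(200)  -- up to 200 items per worker
if not lru then
    error("failed to create LRU cache: " .. (err or "unknown"))
end

local shared = ngx.shared.cache

local function get_value(key)
    -- Level 1: worker-local, no lock contention, can hold Lua tables.
    local value = lru:get(key)
    if value then
        return value
    end

    -- Level 2: shared dict, scalar values only, shared by all workers.
    value = shared:get(key)
    if value then
        lru:set(key, value, 60)  -- repopulate level 1 for 60s
        return value
    end

    -- Miss in both levels: recompute (e.g. hit Redis or a backend),
    -- then populate both layers.
    value = expensive_fetch(key)  -- hypothetical backend call
    shared:set(key, value, 300)
    lru:set(key, value, 60)
    return value
end
```

Is that roughly the lookup order you use in production?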
Again, just trying to get some general ideas.
I think I'm likely going to use shared dicts instead of Redis for most of my ephemeral caching, but I'd definitely like to learn as much as I can about all of the different options.
>It'll be very hard to do reasonable predictions without knowing data
set and use patterns well enough. And yeah, computer engineering is
hard :) And that's why profiling and experiments are always
recommended.
Absolutely. Right now, the load my app gets is low enough that I could get away with using Redis for everything with no problems at all, but I expect the load to increase significantly in the future. If I start running into caching performance or memory problems, I'll definitely profile my current solution and compare it with a few alternatives before coming back here.
Thanks,
Ian
On Thursday, August 21, 2014 3:20:02 PM UTC-4, agentzh wrote:
Hello!
On Thu, Aug 21, 2014 at 12:19 PM, Yichun Zhang (agentzh) wrote:
> We use 2 levels of caching (lua-resty-lrcache and ngx.shared.DICT)
Sorry, typo here. It should be lua-resty-lrucache:
https://github.com/openresty/lua-resty-lrucache
Regards,
-agentzh