Thanks. I assume the 1GB limit applies to the Lua-land cache (resty.lrucache) and not to ngx.shared.DICT.
So essentially, I could allocate ten 4GB ngx.shared.DICT zones (~40GB total) plus maybe 1000 keys per worker for lrucache.
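In case it helps, here is a rough sketch of what I have in mind, following the two-tier layout suggested below (the zone names, sizes, shard count, and the `shard_for`/`get_response` helpers are all hypothetical, not from this thread):

```lua
-- nginx.conf (hypothetical zone names/sizes; 10 x 4GB ≈ 40GB of the ~50GB RAM):
--   lua_shared_dict resp_cache_0 4096m;
--   ...
--   lua_shared_dict resp_cache_9 4096m;

local lrucache = require "resty.lrucache"

-- Per-worker VM-level cache, capped at ~1000 keys as suggested,
-- to stay well inside the LuaJIT GC-managed memory limit.
local lru, err = lrucache.new(1000)
if not lru then
    error("failed to create lrucache: " .. (err or "unknown"))
end

local NUM_SHARDS = 10

-- Pick an shm dict by hashing the request URI (manual sharding).
local function shard_for(uri)
    local n = ngx.crc32_short(uri) % NUM_SHARDS
    return ngx.shared["resp_cache_" .. n]
end

local function get_response(uri)
    -- 1. cheapest lookup: the per-worker Lua-land cache
    local resp = lru:get(uri)
    if resp then
        return resp
    end

    -- 2. shared memory zone, visible to all workers
    local dict = shard_for(uri)
    resp = dict:get(uri)
    if resp then
        lru:set(uri, resp, 60)  -- keep a hot copy per worker for 60s
    end
    return resp
end
```

The two tiers match the suggestion: big shm zones for bulk storage, with a small lrucache in front so hot keys avoid shm-level locking.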
> On Nov 15, 2015, at 9:37 AM, Yichun Zhang (agentzh) <age...@gmail.com> wrote:
>
> Hello!
>
>> On Sun, Nov 15, 2015 at 10:19 PM, RJoshi wrote:
>> Thanks @agentzh.
>> Do you see a performance impact if the number of keys increases in resty.lrucache, given the implementation is based on a queue?
>
> It depends on the key patterns and your use cases. Also, mind you,
> there is a memory limit in GC-managed memory in each LuaJIT VM (which
> is per worker). The limit is 4GB on i386 and 1GB on x86_64 (with the
> luajit-mm plugin, the limit can be 2 GB on x86_64).
>
>> I have ~50GB RAM available which can be utilized for caching API request URIs and their responses. What would be your recommendation?
>
> Use big memory for shm-based stores and moderate numbers of keys (like
> 1000ish?) in VM-level cache stores (like lua-resty-lrucache).
>
>> How much should be allocated for resty.lrucache and how much for ngx.shared.DICT ?
>
> See above.
>
>> Do you recommend creating multiple shared.dict zones vs. one big one, maybe based on hashing the request URI?
>
> Multiple shm dicts and manual sharding can be more efficient.
>
> Regards,
> -agentzh.