Hi Scott,
> On Mar 18, 2017, at 12:16, Scott Cameron <rout...@gmail.com> wrote:
>
> I have a small Lua service which generates a unique ID for every user. The service checks SHM for the key's presence, then falls back to redis.
>
> If the key exists in redis, I populate the value (0 or 1) into shm and then proceed.
>
> If the key does not exist in redis, I populate the value of 0 into shm and proceed.
>
> I end up with hundreds of thousands of keys, and after a day the %sys time of nginx slowly creeps up. If left unattended, nginx goes into a bad state 4-7 days later.
>
> A simple nginx reload seems to do the trick.
Reload? Or restart? Shared dictionaries are not cleared on a reload (HUP signal). If reloading actually does fix it, something else is in play besides shared dictionary latency.
> I use resty.redis with set_keepalive.
>
> Does the LRU become more expensive over time / number of keys?
Shared dictionaries don't have an LRU mechanism per se, though individual keys can be given an expiry time. If you never expire or remove keys and are constantly adding new ones, it's not surprising that you'd eventually run into performance issues: shared dict reads and writes are both logarithmic-time operations at best. If you need LRU functionality, consider lua-resty-lrucache, which keeps a per-worker data store.
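For reference, here's a rough sketch of both options. It assumes a "lua_shared_dict ids 10m;" directive in nginx.conf, that lua-resty-lrucache is installed, and hypothetical user_key/flag values coming from your lookup logic:

    -- assumes "lua_shared_dict ids 10m;" in nginx.conf
    local ids = ngx.shared.ids

    -- Option 1: give each key an expiry so stale entries are reclaimed.
    -- The third argument to set() is the exptime in seconds.
    local ok, err = ids:set(user_key, flag, 600)
    if not ok then
        ngx.log(ngx.ERR, "shdict set failed: ", err)
    end

    -- Option 2: a per-worker LRU cache capped at a fixed number of items.
    -- Create the cache at the Lua module level so it is reused across
    -- requests within each worker.
    local lrucache = require "resty.lrucache"

    local cache, err = lrucache.new(100000)  -- at most 100k entries per worker
    if not cache then
        error("failed to create lrucache: " .. (err or "unknown"))
    end

    cache:set(user_key, flag, 600)            -- optional per-item TTL in seconds
    local cached, stale = cache:get(user_key)

Note that with lua-resty-lrucache each worker keeps its own copy, so a miss in one worker may still require a Redis round trip even if another worker already has the value.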
Consider posting a full, minimal example of your use case so we can better understand what's going on :)