Hi,
On Wed, Dec 14, 2016 at 3:27 PM, Neil Fitzgerald <fitzger...@gmail.com>
wrote:
> Hey all,
>
> Newbie here, with a quick question on a performance constraint.
>
> Specifically, the docs
> <https://github.com/openresty/lua-nginx-module#ngxshareddictget_keys> for
> ngx.shared.DICT.get_keys() say:
>
> *WARNING* Be careful when calling this method on dictionaries with a
>> really huge number of keys. This method may lock the dictionary for quite a
>> while and block all the nginx worker processes that are trying to access
>> the dictionary.
>
>
> This makes sense, but it's a little vague. I'd like to know the order of
> magnitude of keys where you'd start to run into problems (let's say where
> you're stuck blocking for ~100ms on a pretty standard box). Will you run
> into issues with anything more than 1024? Or more like at 1 million?
>
> I have very little idea of how this works on the backend, so any educated
> guesses would be helpful.
>
A bit of background: the underlying structure that shared dictionaries use
is a red-black tree (https://en.wikipedia.org/wiki/Red%E2%80%93black_tree).
get_keys(), however, is a linear operation: roughly two passes over the
tree (O(2n), if I recall an earlier post on this list correctly), which is
of course still O(n). The lock the docs mention is the same lock taken for
the other operations; the difference is that lookups and inserts on a
red-black tree are O(log n), so their time spent holding the lock grows
much more slowly as the dictionary gets bigger. That's why get_keys() is
singled out in the docs: it's the one whose lock time scales with the full
key count.
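
For what it's worth, get_keys() also takes an optional max_count argument
(default 1024), and passing 0 lifts the limit and fetches every key, which
is exactly the case the warning is about. A minimal sketch, assuming a
shared dict named my_cache has been declared in nginx.conf (the name and
size here are just placeholders):

```lua
-- nginx.conf would need something like:
--   lua_shared_dict my_cache 10m;

local dict = ngx.shared.my_cache

-- With no argument, max_count defaults to 1024, so the time spent
-- holding the dictionary lock is bounded regardless of dict size:
local some_keys = dict:get_keys()

-- Passing 0 removes the cap and walks the whole tree while holding
-- the lock; this is the call the docs warn about on huge dicts:
local all_keys = dict:get_keys(0)

ngx.say("fetched ", #all_keys, " keys")
```

So if a bounded sample of keys is enough for your use case, the default
already keeps the lock time small; it's the unbounded get_keys(0) on a
dict with a huge number of keys that can stall the other workers.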