Hello!
On Wed, Feb 10, 2016 at 9:04 PM, Alexandr Roman wrote:
> I have weird trouble with my OpenResty. As far as I can tell, Lua spends a little
> bit more time on each request, approx. +0.05 sec. After a few days in production the
> server is only able to process ~30 requests per second (2xCPU, 4GB RAM, openresty +
> redis + rabbitmq on the same box).
Are you suggesting performance degradation here? Is it CPU bound or IO
bound? What is your nginx workers' CPU usage?
It's better to sample some on-CPU and off-CPU flame graphs for your
nginx workers under load when something happens:
https://openresty.org/#Profiling
> The garbage collector shows 290KB constantly on
> each request. I did try replacing my Lua script with this simple one:
>
Are you sure it's growing *forever* or just intermittently? I'm asking
because the garbage collector does incur some delay in collecting
garbage. And you may want to tune the GC pace to fit your use case via
collectgarbage() (e.g., by tuning the setpause parameter).
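For example, a minimal sketch of what that tuning might look like
(assuming you can set it once in an init_by_lua_block so it applies to
all workers; the value 100 here is only illustrative, not a
recommendation):

    init_by_lua_block {
        -- The default pause is 200, i.e. the collector waits until the
        -- total memory in use doubles before starting a new cycle.
        -- A smaller value (e.g. 100) makes the GC run more eagerly,
        -- trading a bit of CPU time for a smaller memory footprint.
        collectgarbage("setpause", 100)
    }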
> content_by_lua_block {
> ngx.log(ngx.ERR, os.clock())
> ngx.log(ngx.ERR, string.format("GC: %dKB", collectgarbage("count")))
> collectgarbage("restart")
> }
>
I've tested this example on my side with many requests (ab -n100000 -k
-c2 localhost:8080/t) and the error.log entries show that the GC count
always stays within the 51KB ~ 64KB range, no matter how many requests
I keep sending to the server.
BTW, restarting the GC upon every request is a bad idea. Using
os.clock() is also a bad idea since it may incur (expensive) system
calls.
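For instance, a test handler roughly along these lines would avoid both
problems (just a sketch; ngx.now() returns nginx's cached time, so it
does not add a system call per request):

    content_by_lua_block {
        -- nginx's cached timestamp, no extra syscall on each call
        ngx.log(ngx.ERR, "time: ", ngx.now())
        -- only read the current GC count (in KB); do not restart the collector
        ngx.log(ngx.ERR, string.format("GC: %.1fKB", collectgarbage("count")))
    }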
Because of the inherent delay in collecting garbage with a
mark-and-sweep GC algorithm, it's wrong to just send a few test
requests and check how the GC count changes. You need some sustained
load to reason about memory leaks.
>
> The result is the same but the numbers are much lower.
>
> Any clues welcome.
>
> nginx -V
> nginx version: openresty/1.9.3.2
You may also want to upgrade to the latest 1.9.7.3 formal release.
Best regards,
-agentzh