Hello Team,
We have consistently seen about 1% of upstream requests result in a new TCP connection, which makes sense given that the default for keepalive_requests is 100 (each upstream keepalive connection is closed after serving 100 requests).
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests
Today we tried setting keepalive_requests 100000; expecting that, since per-worker traffic is high enough to keep connections busy, the new-connection rate would drop well below 1% (in principle toward 0.001%).
However, even after deploying the new configuration with no errors, we still see ~1% of requests resulting in a new upstream connection, and upstream keepalive connections still being closed after 100 requests.
Our upstream configuration looks like this:
upstream backend {
    server 0.0.0.1;            # placeholder; real peers are chosen in upstream.lua
    balancer_by_lua_file /etc/nginx/lua/upstream.lua;
    keepalive 10000;           # default is no keepalive connection cache; the cache is per worker
    keepalive_requests 100000; # default is 100
    keepalive_timeout 59;      # default is 60s; kept lower than the upstream servers' own keepalive timeout
}
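
For context, upstream.lua is essentially a standard ngx.balancer peer-selection script. A minimal sketch of that pattern is below (this is not our exact script; the hard-coded host and port are placeholders and our real peer-selection logic is omitted):

-- balancer_by_lua_file sketch (placeholder peer; real selection logic omitted)
local balancer = require "ngx.balancer"

-- In the real script the peer is chosen dynamically per request;
-- it is hard-coded here only for illustration.
local host = "10.0.0.10"
local port = 8080

local ok, err = balancer.set_current_peer(host, port)
if not ok then
    ngx.log(ngx.ERR, "failed to set the current peer: ", err)
    return ngx.exit(500)
end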
We suspect that balancer_by_lua_file is not honouring the custom keepalive_requests value.
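
For comparison, the same keepalive settings on a plain upstream without the Lua balancer would look like this sketch (the server name is a placeholder):

upstream backend_static {
    server backend1.example.com:8080;
    keepalive 10000;
    keepalive_requests 100000;
    keepalive_timeout 59;
}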
Can anyone shed any light on this?