gaccardo



2014-12-05 16:42 GMT-03:00 Yichun Zhang (agentzh) <age...@gmail.com>:
Hello!

On Fri, Dec 5, 2014 at 4:42 AM, Guido wrote:
>
> Well, I've done it. I only see the client closing the connection after
> sending 100 requests, so the keep-alive is working externally, but I don't
> see anything regarding upstreams in the file.
>

Will you provide the raw debug logs?

Sure, I have no access to the logs today, but I'll add them tomorrow.
 

>
> This is beyond my skills... I'll need more time to read about stap++ and how
> to use it.
>

See https://github.com/openresty/stapxx#installation

Regards,
-agentzh.



--
-- Guido Accardo --
"... What we know is a drop, what we ignore is the ocean ..." Isaac Newton

agentzh

Hello!

On Sun, Dec 7, 2014 at 7:26 AM, Guido wrote:
>
> Well... It seems that it was a backend problem. I've modified the code of
> the backend to honor the keep-alive and now it's working

Alas. We've wasted so much time here ;)

> but I'm having another
> problem: nginx's response time is equal to the longest response of /prod or
> /dev. I mean, if /prod takes 10 ms to answer and /dev 4 seconds, the client
> will receive /prod's answer after 4 seconds, and the same the other way
> around. So, the ngx.eof() hack seems to have stopped working.
>

You should only see such things if you have enabled the downstream
keep-alive and these slow requests are subsequent requests on the
same downstream (keep-alive) connection (note that for the first
request on each downstream connection, the client should receive the
response data without any extra delay).

To quote the official documentation for ngx.eof():

https://github.com/openresty/lua-nginx-module#ngxeof

"When you disable the HTTP 1.1 keep-alive feature for your downstream
connections, you can rely on descent HTTP clients to close the
connection actively for you when you call this method. This trick can
be used do back-ground jobs without letting the HTTP clients to wait
on the connection, as in the following example:

 location = /async {
     keepalive_timeout 0;
     content_by_lua '
         ngx.say("got the task!")
         ngx.eof()  -- a descent HTTP client will close the connection
at this point
         -- access MySQL, PostgreSQL, Redis, Memcached, and etc here...
     ';
 }

But if you create subrequests to access other locations configured by
Nginx upstream modules, then you should configure those upstream
modules to ignore client connection abortions if they do not do so by
default. For example, by default the standard ngx_http_proxy_module
will terminate both the subrequest and the main request as soon as the
client closes the connection, so it is important to turn on the
proxy_ignore_client_abort directive in your location block configured
by ngx_http_proxy_module:

 proxy_ignore_client_abort on;

A better way to do background jobs is to use the ngx.timer.at API."
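To make that recommendation concrete, here is a minimal sketch of the
ngx.timer.at pattern (the /async-timer location name is made up for
illustration and is not from the original mail); the response
completes first, and the slow work runs in a detached timer handler:

 location = /async-timer {
     content_by_lua '
         ngx.say("got the task!")

         local function background(premature)
             if premature then
                 return  -- the worker is shutting down, skip the job
             end
             -- the slow work runs here, fully detached from the
             -- client connection, so no ngx.eof() trick is needed
             ngx.log(ngx.INFO, "background job done")
         end

         local ok, err = ngx.timer.at(0, background)
         if not ok then
             ngx.log(ngx.ERR, "failed to create timer: ", err)
         end
     ';
 }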

> So, my question is, is it possible to "detach" /dev's answers with
> keep-alive connections enabled?
>

As mentioned in the documentation quoted above, to fully "detach" from
the original request, you have to use the ngx.timer.at API to create a
timer. But for obvious reasons, you cannot use subrequests in a timer
handler (having already detached from the original request, how could
you create subrequests of it?). You can use one of the third-party
lua-resty-http* libraries out there, like lua-resty-http-simple [1],
instead.
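For reference, a basic lua-resty-http-simple call looks roughly like
this (the host and path below are placeholders; on success, res
carries status, headers, and body fields):

local http = require "resty.http.simple"

-- plain GET against a placeholder host; cosocket-based, so this also
-- works inside a timer handler where subrequests are unavailable
local res, err = http.request("example.com", 80, { path = "/" })
if res then
    ngx.log(ngx.INFO, "status: ", res.status)
else
    ngx.log(ngx.ERR, "request failed: ", err)
end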

The ngx.eof() work-around still works if you fulfill the requirements
stated in the documentation quoted above. If not, please provide a
minimal and standalone example that can reproduce the problem.

BTW, please do not quote more than 50 lines of the original mail text
in a single bulk. It is quite annoying for the readers and a waste
of bandwidth.

Regards,
-agentzh

[1] https://github.com/bakins/lua-resty-http-simple

gaccardo

> Alas. We've wasted so much time here ;)

Yes!!! Thank you very much for the time you spent helping me; you've been extremely helpful.

> BTW, please do not quote more than 50 lines of the original mail text
> in a single bulk. It is quote annoying for the readers and is a waste
> of bandwidth.

I apologize for this, I won't do it anymore.

Well, I think I finally got this working using ngx.timer.at and lua-resty-http-simple as you suggested. Here is how I've accomplished it:

duplicator.lua:
----

-- read the request body so it can be forwarded to both backends
ngx.req.read_body()

local http = require "resty.http.simple"

-- timer handler: replay the request against /dev in the background
local function dev_callback(premature, exchange_body)
    if premature then
        return  -- the worker is shutting down; skip the job
    end

    local res, err = http.request("127.0.0.1", 80, { method = 'POST',
        path = '/dev', body = exchange_body })

    if not res then
        ngx.log(ngx.ERR, "http failure: ", err)
    else
        ngx.log(ngx.INFO, "Async callback SUCCESS")
    end
end

local exchange_body = ngx.req.get_body_data()

-- the POST subrequest inherits the parent request body read above
local r1 = ngx.location.capture('/prod', { method = ngx.HTTP_POST })
ngx.say(r1.body)

-- fire the /dev copy once the /prod response has been sent
local ok, err = ngx.timer.at(0, dev_callback, exchange_body)
if not ok then
    ngx.log(ngx.ERR, "failed to create timer: ", err)
end


site.conf:

upstream prod0 {
        server 10.0.0.2:5000;
        keepalive 1000000;
}

upstream dev0 {
        server 10.0.0.1:6000;
        keepalive 1000000;
}


#  exchange
server {
        listen 80;
        access_log  /var/log/nginx/access-exchange-5000.log;
        error_log /var/log/nginx/error-exchange-5000.log debug;
        keepalive_timeout 600;
        keepalive_requests 100000;

        location /dev {
                proxy_ignore_client_abort on;
                proxy_pass      http://dev0;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
        }

        location /prod {
                proxy_pass      http://prod0;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
        }

        location / {
                content_by_lua_file /etc/nginx/luas/duplicator.lua;
        }

}

Do you see any problem or anything really bad I've done here?

I hope this helps someone, so I've copied the whole configuration.

Again, I really appreciate the time you spent helping me.

Guido.-


agentzh

Hello!

On Mon, Dec 8, 2014 at 9:52 AM, Guido wrote:
> site.conf:
>
> upstream prod0 {
>         server 10.0.0.2:5000;
>         keepalive 1000000;

I wonder if it makes sense to configure the pool limit to be so huge.
Are you running some long-polling apps? For short and quick requests,
such a huge pool can lead to significant performance degradation on the
backend server. Also be aware that connection pools are per-worker, so
if you have N workers, then you have N * 1000000 connections to your
backend in total.
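To make the arithmetic concrete, a more modest pool might look like
this (the value 64 below is a hypothetical example, not a
recommendation from this thread):

upstream prod0 {
        server 10.0.0.2:5000;
        # the pool is per worker: with e.g. 4 workers this allows up
        # to 4 * 64 = 256 idle keep-alive connections to the backend
        keepalive 64;
}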

> local function dev_callback(premature, exchange_body)
>    local res, err = http.request("127.0.0.1", 80, { method = 'POST',
>        path = '/dev', body = exchange_body } )

I don't see why you access your own server via the loopback device.
You can just connect to your dev box directly via http.request and get
rid of your location /dev altogether, which is much more efficient.
lua-resty-http-simple supports connection pooling as well.
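In other words, the timer callback could talk to the dev backend
directly, roughly like this (a sketch reusing the 10.0.0.1:6000
address from the posted site.conf; keeping path = '/dev' assumes the
backend expects the same URI that proxy_pass used to forward):

local http = require "resty.http.simple"

local function dev_callback(premature, exchange_body)
    if premature then
        return
    end
    -- connect straight to the dev box: no loopback hop through nginx
    -- and no location /dev block needed
    local res, err = http.request("10.0.0.1", 6000, { method = 'POST',
        path = '/dev', body = exchange_body })
    if not res then
        ngx.log(ngx.ERR, "http failure: ", err)
    end
end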

Regards,
-agentzh

gaccardo



> I wonder if it makes sense to configure the pool limit to be so huge.
> Are you running some long-polling apps? For short and quick requests,
> such a huge pool can lead to significant performance degradation on the
> backend server. Also be aware that connection pools are per-worker, so
> if you have N workers, then you have N * 1000000 connections to your
> backend in total.

I agree. That huge, nonsensical number was just a quick test from when I was trying to figure out the initial problems. I've modified it.

> I don't see why you access your own server via the loopback device.
> You can just connect to your dev box directly via http.request and get
> rid of your location /dev altogether, which is much more efficient.
> lua-resty-http-simple supports connection pooling as well.

In my tests, dev was a mock service running on localhost. In my real environment, nginx and dev will be on separate servers.