Hello, everybody
We are running a system of multiple NginX instances (OpenResty) that
communicate with each other via HTTP requests. We will need to extend and
optimize this communication, so I have 2 questions:
1. Which of the following 2 methods is faster?
a. Using a proxy:
- the client NginX has an upstream defined towards the backend and an
internal route (location) that is consumed via "ngx.location.capture":
http {
    upstream backend {
        server 127.0.0.1:11001;
        keepalive 32;
    }
    ...
    location ~ /api/(call1|call2)$ {
        internal;
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;
        proxy_connect_timeout 600s;
        proxy_ignore_client_abort on;
        proxy_buffering on;
        proxy_buffers 64 8k;
        proxy_set_header Connection "Keep-Alive";
        proxy_set_header Host $host;
        proxy_set_header Http-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_intercept_errors on;
        # pass the subrequest to the upstream defined above
        proxy_pass http://backend;
    }
}
And in Lua:

local cjson = require "cjson"

local res = ngx.location.capture("/api/call1", {
    method = ngx.HTTP_POST,
    body   = cjson.encode({ action = "some_action", payload = "some payload" }),
})
b. Using a cosocket-based HTTP client library from Lua (see the sketch
below). I would prefer this second method because I find it more
straightforward, but I am concerned about the performance difference.
Is it significant or not?
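For reference, a minimal sketch of what option b. looks like with
lua-resty-http (one of the libraries from question 2 below); the address,
timeouts and pool size mirror the proxy config above, and the JSON content
type is my assumption:

local http = require "resty.http"
local cjson = require "cjson"

local httpc = http.new()
-- connect / send / read timeouts in ms, mirroring the 600s proxy timeouts
httpc:set_timeouts(600000, 600000, 600000)

local res, err = httpc:request_uri("http://127.0.0.1:11001/api/call1", {
    method = "POST",
    body = cjson.encode({ action = "some_action", payload = "some payload" }),
    -- assumption: the backend expects JSON
    headers = { ["Content-Type"] = "application/json" },
    -- keep up to 32 idle connections pooled, like "keepalive 32" above
    keepalive_pool = 32,
})
if not res then
    ngx.log(ngx.ERR, "request to backend failed: ", err)
    return
end
-- res.status, res.headers and res.body are now available

One difference worth noting besides raw speed: ngx.location.capture always
buffers the full response in memory, while a cosocket client such as
lua-resty-http can also stream the body (via request() and res.body_reader).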
2. The second question is related to point b. of the previous question:
which HTTP client would be the fastest for our case? Here are some options
I found:
- https://github.com/bakins/lua-resty-http-simple - pretty straightforward
and enough for our needs. This is what we are using currently, but it is
pretty old (2014)
- https://github.com/pintsized/lua-resty-http - very popular, updated this
year
- https://github.com/timebug/lua-resty-httpipe - also updated this year,
but seems less popular
- https://github.com/tokers/lua-resty-requests - I found some
recommendations for this one; the author says it has good performance;
also updated recently (most recently of the four)
Do you have any other recommendations? Are there any tests regarding
the performance of these libs?
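In case there are no published numbers, here is the kind of harness I would
use to measure it myself (ports and location names are made up for
illustration): expose both methods on sibling locations and hit each one
with a load generator such as wrk.

location = /bench/capture {
    content_by_lua_block {
        -- method a: subrequest through the internal proxied location
        local res = ngx.location.capture("/api/call1", {
            method = ngx.HTTP_POST, body = "{}",
        })
        ngx.print(res.body)
    }
}

location = /bench/cosocket {
    content_by_lua_block {
        -- method b: direct cosocket request to the backend
        local http = require "resty.http"
        local res, err = http.new():request_uri(
            "http://127.0.0.1:11001/api/call1",
            { method = "POST", body = "{}" }
        )
        ngx.print(res and res.body or err)
    }
}

Then, e.g., "wrk -t4 -c64 -d30s http://127.0.0.1/bench/capture" versus the
same against /bench/cosocket, comparing requests/sec and latency.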
Any other advice?
Thank you
Best regards,
Bogdan