Hello!
On Thu, Sep 5, 2013 at 12:43 AM, shubham srivastava wrote:
>
> for brevity just posting the relevant code : looks like for something out
> there through which cache is ignored : also the headers out of
> Passenger_backed are not returned through the conetnt_by_lua stuff.
>
The proxy cache should work with subrequests initiated by ngx_lua's
ngx.location.capture() API.
I've spotted several mistakes in your nginx configuration snippet:
1. Don't use the add_header directive to add the X-Cached header in
location /passenger_backend because the response headers added by
"add_header" will not be captured by ngx_lua's ngx.location.capture.
Use the ngx_headers_more module instead (and if you're not using my
openresty bundle, then you need to ensure that
--add-module=/path/to/headers-more-nginx-module is given *after*
--add-module=/path/to/lua-nginx-module when building nginx with
./configure yourself).
2. You don't forward the X-Cached response header from the subrequest
to the main request after the ngx.location.capture() call.
3. Your gzip and client_max_body_size settings should be put into the
main request's location (or in its outer scope), location /, because
they have no effect in the location accessed by subrequests only.
4. Because you're using $request_body in proxy_cache_key, you should
configure client_body_buffer_size to exactly the same value as your
client_max_body_size. Otherwise the $request_body variable will be
empty whenever the request body is large enough to be automatically
buffered to a temporary file.
5. You should always actively read the request body in your main
request before initiating a subrequest. That is, you should call
ngx.req.read_body() before the ngx.location.capture() call in your Lua
code.
6. Because you've configured proxy_read_timeout and
proxy_connect_timeout, you may also want to configure
proxy_send_timeout at the same time.
7. Because location /passenger_backend is used by subrequests only,
it's better to add the line "internal;" to that location block. Also,
you can add "=" to the location directive to make the match exact.
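Regarding point 4 above: the danger appears when client_body_buffer_size
is smaller than client_max_body_size. For instance, with a (hypothetical)
16k buffer and a 1M limit, any request body between those two sizes gets
spooled to a temporary file, $request_body evaluates to the empty string,
and all such requests collide on the same cache key. The file names and
sizes below are just placeholders for illustration:

    # assuming client_body_buffer_size 16k and client_max_body_size 1M:
    head -c 102400 /dev/urandom > a.bin
    head -c 102400 /dev/urandom > b.bin
    # both 100KB bodies exceed the 16k buffer, so $request_body is
    # empty for both requests and they share a single cache key:
    curl -i --data-binary @a.bin localhost:8080/
    curl -i --data-binary @b.bin localhost:8080/

Keeping the two directives equal, as in the sample configuration below,
avoids this pitfall.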
I've tested a slightly modified version of your configuration on my
side and the proxy cache works as expected.
Below is the tested self-contained configuration which you can run on your side:
# the following line should be put into http {} directly:
proxy_cache_path /tmp/cache levels=1:2 keys_zone=small:10m;
# the following lines should be put into server {}:
location / {
    client_max_body_size 1M;
    client_body_buffer_size 1M;

    # gzip configurations omitted here...

    content_by_lua '
        ngx.req.read_body()
        local res = ngx.location.capture(
            "/passenger_backend",
            { method = ngx.HTTP_POST,
              body = ngx.var.request_body })
        ngx.header["X-Cached"] = res.header["X-Cached"]
        ngx.say(res.body)
    ';
}
location = /passenger_backend {
    internal;

    proxy_cache small;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    proxy_cache_valid 200 302 72h;
    proxy_cache_use_stale updating;
    proxy_buffering on;
    proxy_buffers 8 32M;
    proxy_buffer_size 64k;
    proxy_read_timeout 120s;
    proxy_connect_timeout 120s;
    proxy_send_timeout 120s;
    proxy_ignore_headers Set-Cookie Cache-Control;
    proxy_ignore_headers X-Accel-Expires Expires;
    proxy_pass http://127.0.0.1:$server_port/back;

    # you need to configure the ngx_headers_more module in
    # the right way:
    more_set_headers "X-Cached: $upstream_cache_status";
}
# fake http backend service for /passenger_backend
location = /back {
    # here we use a bit of Lua to generate random output
    # for testing.
    content_by_lua '
        ngx.say(math.random(20))
    ';
}
Here we added location = /back to serve as the fake HTTP backend
service that location /passenger_backend accesses. In /back, we output
a random number via a bit of Lua, so the output would differ on every
single request if the cache were not in effect.
Here are the test results:
$ curl -i localhost:8080/t
HTTP/1.1 200 OK
Server: nginx/1.4.2
Date: Thu, 05 Sep 2013 18:35:23 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
X-Cached: MISS
16
$ curl -i localhost:8080/t
HTTP/1.1 200 OK
Server: nginx/1.4.2
Date: Thu, 05 Sep 2013 18:35:38 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
X-Cached: HIT
16
$ curl -i localhost:8080/t
HTTP/1.1 200 OK
Server: nginx/1.4.2
Date: Thu, 05 Sep 2013 18:35:58 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
X-Cached: HIT
16
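Since $request_body is part of the cache key, POST requests with
distinct (small) bodies should be cached separately, while repeating
the same body should hit the cache. A quick way to check this
(hypothetical commands, not part of the test session above):

    curl -i -d 'a=1' localhost:8080/   # fresh body: expect X-Cached: MISS
    curl -i -d 'a=1' localhost:8080/   # same body again: expect X-Cached: HIT
    curl -i -d 'a=2' localhost:8080/   # different body: expect X-Cached: MISS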
Best regards,
-agentzh