On Mon, May 21, 2012 at 10:40 AM, agentzh <age...@gmail.com> wrote:
> Hello!
>
> On Mon, May 21, 2012 at 4:24 PM, Dirk Feytons <dirk....@gmail.com> wrote:
>> I'm investigating nginx and ngx_lua to replace our current Lua-based
>> web framework. I really like how you glued nginx's event model with
>> Lua's coroutines to create a low footprint yet scalable solution.
>>
>
> See https://github.com/chaoslawful/lua-nginx-module/wiki/Introduction
>
>> One question though: assuming only one worker process is it correct
>> that issuing a subrequest using ngx.location.capture() to a location
>> that is also handled by ngx_lua will not really free the Lua state to
>> process other requests?
>>
>
> The top-level Lua state (i.e., the VM itself) is shared across all the
> requests handled by this single nginx worker process and it is never
> freed until the termination of the nginx worker process. For every
> (sub)request, ngx_lua creates a new lua_State by calling
> lua_newthread:
>
> http://www.lua.org/manual/5.1/manual.html#lua_newthread
>
> For now, there is a one-to-one relationship between nginx
> (sub)requests and Lua coroutines. But in the near future, it will
> become a one-to-many relationship.
Yeah; I'm familiar with Lua's coroutines.
Good to hear you're working on lifting the current limitation on
coroutine use in Lua user code.
Could you clarify exactly when ngx_lua yields and allows nginx to
continue with its event loop?
>> Reason why I ask: in the Lua web framework I'll need access to the
>> rest of our system for authentication, fetching data and such. I can
>> do this in two ways:
>> (1) Create Lua bindings for those APIs and wrap them in a nice layer
>> around ngx.location.capture() so for the web guys it looks like simple
>> direct synchronous calls while still allowing nginx to remain
>> responsive. This has the added benefit that those Lua bindings can be
>> reused outside of nginx for other Lua scripting purposes.
>
> You can also take advantage of the ngx_lua cosocket API to do
> synchronous and non-blocking calls:
>
> http://wiki.nginx.org/HttpLuaModule#ngx.socket.tcp
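The cosocket API reads like plain blocking Lua code but yields to the
nginx event loop under the hood. A minimal sketch of what such a call
could look like (the backend address and the line-based protocol here
are just placeholders for illustration):

--------------8<--------------
location /backend-call {
    default_type 'text/plain';
    content_by_lua '
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)  -- milliseconds

        -- connect() yields; the worker keeps serving other requests
        local ok, err = sock:connect("127.0.0.1", 8080)
        if not ok then
            ngx.say("connect failed: ", err)
            return
        end

        local bytes, err = sock:send("GET_USER 42\\n")
        if not bytes then
            ngx.say("send failed: ", err)
            return
        end

        -- receive() reads a single line by default, also non-blocking
        local line, err = sock:receive()
        if not line then
            ngx.say("receive failed: ", err)
            return
        end
        ngx.say("backend said: ", line)

        -- put the connection into the per-worker keepalive pool
        sock:setkeepalive(10000, 10)
    ';
}
--------------8<--------------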
I think I prefer a FastCGI solution then; otherwise I have to invent
my own protocol to communicate with the other side.
>> (2) Create a dedicated nginx module in C to do more or less the same.
>>
>
> Coding in C is hard, and it is even harder when coding in C for Nginx ;)
>
> We've been working so hard on ngx_lua just to eliminate coding in C for Nginx ;)
I'm not afraid of C :)
But indeed, if I can avoid it in this case I'll be happy to.
Documentation of nginx's C APIs seems to be ... lacking.
>> I suspect (1) is not really allowing ngx_lua to process another
>> request because the Lua state remains in use?
>>
>
> No, each Lua coroutine has its own state, and they're cleanly
> separated most of the time (except for data shared via Lua modules; see
> http://wiki.nginx.org/HttpLuaModule#Data_Sharing_within_an_Nginx_Worker
> ).
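The data-sharing pattern on that wiki page boils down to a plain Lua
module that require() loads only once per worker, so its table fields
persist across requests in that worker. A minimal sketch (the module
name "mydata" is just an example; the file must be on lua_package_path):

--------------8<--------------
-- mydata.lua
local _M = {}

-- this table is created once per worker and is therefore
-- shared by all requests that worker handles
_M.cache = {}

return _M
--------------8<--------------

and then in any content_by_lua snippet:

--------------8<--------------
local mydata = require "mydata"
mydata.cache.hits = (mydata.cache.hits or 0) + 1
ngx.say("hits served by this worker: ", mydata.cache.hits)
--------------8<--------------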
I was referring to the main Lua state, but in hindsight my question
was pretty confusing and the answer obvious: of course you can't
process multiple requests involving Lua in parallel with a single
worker process. This is clearly demonstrated by adding the following
to nginx.conf:
--------------8<--------------
location /test3 {
    default_type 'text/plain';
    content_by_lua '
        ngx.say("test3")
    ';
}

location /test2 {
    default_type 'text/plain';
    content_by_lua '
        ngx.say("sleeping")
        os.execute("sleep 10")  -- blocks the whole worker process
    ';
}

location /test1 {
    default_type 'text/plain';
    content_by_lua '
        ngx.say("hi there")
        local res = ngx.location.capture("/test2")
        ngx.say("subreq status = ", res.status)
        ngx.say("subreq body = ", res.body)
        ngx.say("the end")
    ';
}
--------------8<--------------
Request /test1 and then, in parallel, request /test3. The latter will
not return anything until the former has completed.
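The culprit here is os.execute(), which stalls the entire worker
process for the full 10 seconds. If your ngx_lua build provides
ngx.sleep (a non-blocking sleep that yields to the event loop), /test2
could presumably be rewritten so that /test3 is served immediately
while /test1 is still waiting on its subrequest:

--------------8<--------------
location /test2 {
    default_type 'text/plain';
    content_by_lua '
        ngx.say("sleeping")
        -- yields to the event loop instead of stalling the worker
        ngx.sleep(10)
    ';
}
--------------8<--------------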
>> Note that this has to run on an embedded device so footprint is
>> important. That's why the ngx_lua model is so appealing: a master
>> process, one worker process and one Lua state for everything. My only
>> concern is that it shouldn't become too complex to keep everything
>> responsive.
>>
>
> Looking forward to your experiment results here ;)
I'll try to keep you guys informed.
Maybe you'll get more pull requests from me when we go for ngx_lua :)
> Best regards,
> -agentzh
>
> P.S. I'm cc'ing the openresty mailing list so that other people can
> see our discussion here: https://groups.google.com/group/openresty
Thanks for the feedback,
Dirk