Hi, I am using a piece of C++ code via the LuaJIT FFI. Currently I use it like this:
1. Module definition:
local ffi = require("ffi")
-- asm("...") binds each declaration to the actual symbol name exported by my.so
ffi.cdef[[
void* init(const char*, const char*, const char*, const char*, int) asm("so export name");
float* extract(void*, const char*) asm("so export name");
]]
local test = ffi.load("my.so")
local _M = {}
function _M.extract(instance, file)
    local result = test.extract(instance, file)
    return result
end

function _M.init(...)
    local instance = test.init(...)
    return instance
end
return _M
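One detail worth noting: the value returned by init() is an FFI cdata pointer, not a plain Lua value. It can be guarded like this; this is just a sketch and assumes my C++ init() returns NULL on failure (a NULL pointer cdata compares equal to nil in LuaJIT):
function _M.init(...)
    local instance = test.init(...)
    -- assumption: the C++ side returns NULL on failure
    if instance == nil then
        return nil, "init failed"
    end
    return instance
end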
2. init_by_lua:
local mo = require("init_extract")
-- intentionally a global variable, so that content_by_lua can see it
instance = mo.init()
3. content_by_lua:
local mo = require("init_extract")
local feature = mo.extract(instance, content)
This worked fine and requests were handled correctly.
I use a global variable "instance" to share this instance within an nginx worker process, because each instance costs a lot of memory; if every worker process kept its own copy, memory would run out. However, when I checked with a shell command, each worker process as well as the master process appeared to consume the same amount of memory.
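For reference, the memory can also be logged from inside Lua rather than from the shell; this is a sketch assuming Linux, where /proc/self/statm reports size, resident, and shared counts in pages. I am also not sure whether the equal numbers prove real duplication, since resident memory includes copy-on-write pages shared with the master after fork:
-- sketch: log this process's memory usage (Linux only; values in pages)
local function log_mem(tag)
    local f = io.open("/proc/self/statm", "r")
    if not f then return end
    local _, resident, shared = f:read("*l"):match("(%d+)%s+(%d+)%s+(%d+)")
    f:close()
    ngx.log(ngx.NOTICE, tag, ": resident=", resident, " shared=", shared)
end
log_mem("after init")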
So I plan to hold this 'instance' in a lua shared dict instead, to reduce memory use, but I ran into problems:
1. Module definition:
Same as above.
2. init_by_lua:
local mo = require("init_extract")
local instance = mo.init()
local shared_data = ngx.shared.shared_data
shared_data:set("instance", instance)
3. content_by_lua:
local mo = require("init_extract")
local shared_data = ngx.shared.shared_data
local instance = shared_data:get("instance")
local feature = mo.extract(instance, content)
When I restart nginx, I observe that each worker process as well as the master process consumes the same amount of memory as before, so I wonder whether this really provides any memory savings.
And when I access it, nginx crashes and reports an error.
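My guess is that the shared dict cannot hold the cdata pointer at all: as far as I know, lua_shared_dict values can only be nil, booleans, numbers, or strings, so set() presumably fails and get() then returns nil, which is passed straight into the C function. A check like this (just a sketch) should show what actually makes it through the dict:
-- in init_by_lua: does set() even accept the pointer?
local shared_data = ngx.shared.shared_data
local ok, err = shared_data:set("instance", instance)
if not ok then
    ngx.log(ngx.ERR, "could not store instance: ", err)
end

-- in content_by_lua: check what comes back before calling into C
local v = ngx.shared.shared_data:get("instance")
ngx.log(ngx.NOTICE, "instance from dict has type: ", type(v))
if v == nil then
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
If pointers really cannot pass through the dict, perhaps I could store the address as a number with tonumber(ffi.cast("uintptr_t", instance)) and ffi.cast("void *", addr) it back in the worker, relying on init_by_lua running in the master before the workers fork, but I am not sure whether that is safe.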
Does anyone have any suggestions?