I store traffic data in a shared dict and only push it to Redis every second (or every x seconds).
How can this be done atomically and without blocking?
something like:
-- log to shared dict every request
local logtbl = ngx.shared.logtbl

local function incr(key, increment)
    if not increment then
        return
    end
    -- add the key if it does not exist yet
    local ok, err = logtbl:add(key, increment)
    if not ok then
        -- increment the existing key
        ok, err = logtbl:incr(key, increment)
        if not ok then
            -- key was deleted in the meantime, add it back
            logtbl:add(key, increment)
        end
    end
end
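
For context, that incr() is called from the log phase on every request, roughly like this (just a sketch; the dict size, the "traffic" module name and the per-host key are placeholders):

http {
    lua_shared_dict logtbl 10m;

    server {
        listen 80;

        location / {
            # ... normal content handling here ...

            log_by_lua_block {
                -- "traffic" is whatever Lua module holds the incr() above
                require("traffic").incr(ngx.var.host, 1)
            }
        }
    }
}
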
-- move to redis every sec
for _, key in ipairs(logtbl:get_keys(0)) do
    local ok, err = r:cincr(key, 288, 300, logtbl:get(key), os.time())
    if ok then
        logtbl:delete(key)
    end
end
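
I am not sure what should trigger that flush loop, though. Would a recurring timer via ngx.timer.at, created from init_worker_by_lua, be the right way? A sketch of what I have in mind (the 1s interval is arbitrary, the actual redis push is left out, and this probably needs to be limited to a single worker):

init_worker_by_lua_block {
    local delay = 1  -- flush every second

    local flush
    flush = function(premature)
        if premature then
            return  -- nginx is shutting down
        end

        local logtbl = ngx.shared.logtbl
        for _, key in ipairs(logtbl:get_keys(0)) do
            -- connect to redis and push logtbl:get(key) here (left out)
            logtbl:delete(key)
        end

        -- reschedule ourselves
        local ok, err = ngx.timer.at(delay, flush)
        if not ok then
            ngx.log(ngx.ERR, "failed to create timer: ", err)
        end
    end

    local ok, err = ngx.timer.at(delay, flush)
    if not ok then
        ngx.log(ngx.ERR, "failed to create timer: ", err)
    end
}
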
Also, is there any way to run a cron-type job inside nginx to handle scheduled tasks, e.g. by hitting an internal location?
Of course I could do it with cron or some request counter, but it would be nice to have
something like a thread running inside nginx and doing such tasks: subscribing
to Redis messages and so on.
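
For the Redis subscribe part, would a long-running timer started the same way be sane? A rough sketch of what I imagine, assuming lua-resty-redis (host, port and the "traffic" channel are made up):

local function subscribe_loop(premature)
    if premature then
        return
    end

    local redis = require "resty.redis"
    local red = redis:new()
    red:set_timeout(5000)  -- ms

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "redis connect failed: ", err)
    else
        local res, suberr = red:subscribe("traffic")
        if res then
            while true do
                local msg, rerr = red:read_reply()
                if msg then
                    -- handle the message here
                elseif rerr ~= "timeout" then
                    break  -- connection broken, fall through and reconnect
                end
            end
        end
    end

    -- reconnect after a short delay
    ngx.timer.at(1, subscribe_loop)
end

-- started once from init_worker_by_lua, like the flush timer above
ngx.timer.at(0, subscribe_loop)
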