Hi,
I've got a cache implemented in OpenResty, using custom Lua code talking to memcached.
One problem I haven't solved to date is implementing a barrier that keeps concurrent requests for the same resource from all hitting the backend.
I've sketched out the logic I think I need to implement for that, and wanted to ask for input from the folks on this list.
Looking at the lua-resty-lock library (https://github.com/openresty/lua-resty-lock), it appears I could:
1. try to send from cache
2. if that fails, lock on an id unique to the request url
3. if we had to wait for a lock, try to send from cache again
4. if the 2nd cache attempt succeeds, unlock and return
5. if the 2nd cache attempt fails, pass to the backend, store the result, and unlock
Does that sound correct? I've sketched out the code below:
-- try to send from cache
if cache_send() then
    return
end

-- cache send failed, lock for pass
local lock = require "resty.lock"
local proxy_lock, err = lock:new("proxy_locks")
if not proxy_lock then
    ngx.log(ngx.ERR, "error initializing proxy_locks: ", err)
end

if proxy_lock then
    -- vid is a hash of the request host and path
    local lock_id = self.vid
    if self.rid then
        -- rid is vid + any vary details picked up
        -- from the first cache_send attempt
        lock_id = self.rid
    end

    local elapsed, err = proxy_lock:lock(lock_id)
    if not elapsed then
        ngx.log(ngx.ERR, "error acquiring lock: ", err)
    else
        local proxy_unlock = function()
            local ok, err = proxy_lock:unlock()
            if not ok then
                ngx.log(ngx.ERR, "error unlocking: ", err)
            end
        end
        -- self.cleanup is run after the request completes
        table.insert(self.cleanup, proxy_unlock)

        -- if we had to wait for the lock (elapsed is 0 when it was
        -- acquired immediately), try a 2nd cache hit
        if elapsed > 0 then
            if cache_send() then
                -- 2nd cache request succeeded
                for _, cleanup in ipairs(self.cleanup) do
                    cleanup()
                end
                return
            end
        end
    end
end

-- fall through, proxy to backend, self.cleanup
-- will be run after it completes
pass()
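
One thing the sketch above leaves out is the nginx side: the "proxy_locks" name passed to lock:new() has to be a lua_shared_dict declared in the http block (something like lua_shared_dict proxy_locks 1m;), and new() also accepts an options table so a worker that dies before unlocking can't hold the lock forever. A minimal sketch of what I have in mind, where the 5s/30s values are just placeholder guesses rather than tuned numbers:

local lock = require "resty.lock"
-- timeout: max seconds lock() will wait before giving up
-- exptime: seconds after which an unreleased lock expires on its own
local proxy_lock, err = lock:new("proxy_locks", {
    timeout = 5,
    exptime = 30,
})

If I'm reading the docs right, lock() returns the time spent waiting (0 when it was acquired immediately), so the elapsed > 0 check above should skip the 2nd cache attempt only for the request that got the lock without contention; on timeout it returns nil plus an error string, which the error branch above just logs before falling through to pass().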