Hello,
I've implemented a custom Lua logging routine using the
https://github.com/cloudflare/lua-resty-logger-socket library. Basically, I build a JSON message for each request (~400-500 bytes per message) and send it to a Redis server. Everything works fine, even under heavy load.
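For reference, each message is built roughly like this (the field names below are just illustrative, not my exact schema):

local cjson = require "cjson.safe"

-- Build one compact JSON document per request (~400-500 bytes).
local json_msg = cjson.encode{
    time   = ngx.now(),
    host   = ngx.var.host,
    uri    = ngx.var.request_uri,
    status = ngx.var.status,
}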
However, I'm facing an issue in case of a Redis failure. If I stop Redis on purpose to simulate an outage, a lot of log messages are lost and never sent to the socket once Redis is restarted. New events are sent normally, but only a small fraction of the events buffered during the failure is processed.
I've tried increasing the drop_limit parameter to 100 MB and beyond:
local logger = require "resty.logger.socket"

local ok, err = logger.init{
    host = '10.0.0.40',
    port = 6379,
    -- max backlog: 100 MB
    drop_limit = 104857600,
}
if not ok then
    ngx.log(ngx.ERR, "failed to initialize the logger: ", err)
end
[... build json_msg variable ...]
local bytes, err = logger.log("RPUSH logs:test_queue '" .. json_msg .. "'\r\n")
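One thing I'm starting to suspect is the inline command format itself: since logger.log() just writes raw bytes to the socket, any quote or CRLF inside json_msg could confuse Redis' inline-command parser and desync the stream. As a sketch, I'm considering encoding the command as a proper RESP multi-bulk array instead (resp_rpush is a helper of mine, not part of the library):

local function resp_rpush(key, value)
    -- RESP multi-bulk encoding: every argument is length-prefixed,
    -- so the payload may safely contain quotes, spaces and CRLF.
    return "*3\r\n"
        .. "$5\r\nRPUSH\r\n"
        .. "$" .. #key .. "\r\n" .. key .. "\r\n"
        .. "$" .. #value .. "\r\n" .. value .. "\r\n"
end

local bytes, err = logger.log(resp_rpush("logs:test_queue", json_msg))
if err then
    ngx.log(ngx.ERR, "failed to log message: ", err)
end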
but with no success... Under certain random circumstances I also see huge spikes of events after restarting Redis, and old entries from the backlog can be sent more than 10-20 times each.
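To at least cope with the duplicates downstream, I'm thinking of tagging every message with a unique id so the consumer can drop replays, something like:

-- Hypothetical workaround: a per-message id based on the nginx
-- request id (nginx 1.11.0+), so replayed backlog entries can be
-- deduplicated by the consumer.
local msg_id = ngx.var.request_id or (ngx.worker.pid() .. "-" .. ngx.now())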
Have you ever encountered a similar issue? Thanks for the pointers!