I'm building a fairly complex chat service that uses WebSocket and Redis.
Because OpenResty currently doesn't support full-duplex cosockets, I open two Redis connections, each handling its own reads.
On the application side I also need two Redis connections: one handles session control (the sessions live in a single Redis instance; in production there will be two), and the other handles pub/sub.
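For context, the pub/sub side of the handler looks roughly like this (a simplified sketch, not the exact production code; it assumes ws and red2 are already set up as in the minimal code further down, and that the reply is the standard Redis pub/sub "message" array):

-- red2 is already in subscribed state at this point
while true do
    local res, err = red2:read_reply()
    if not res then
        if err ~= "timeout" then
            ngx.log(ngx.ERR, "failed to read reply: ", err)
            break
        end
    elseif res[1] == "message" then
        -- res[2] is the channel name, res[3] is the payload
        local bytes, err = ws:send_text(res[3])
        if not bytes then
            ngx.log(ngx.ERR, "failed to send websocket frame: ", err)
            break
        end
    end
end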
The problem I'm running into now is that if I use set_keepalive to put both Redis connections back into the connection pool, the worker process exits.
The log is as follows:
2014/03/31 19:33:41 [info] 60929#0: *1 [lua] pusher.lua:51: closing, client: 127.0.0.1, server: , request: "GET /1/1/pusher HTTP/1.1", host: "127.0.0.1:8060"
2014/03/31 19:33:41 [notice] 60929#0: *1 [lua] pusher.lua:57: irc:1:1:pubsub, client: 127.0.0.1, server: , request: "GET /1/1/pusher HTTP/1.1", host: "127.0.0.1:8060"
2014/03/31 19:33:41 [notice] 60929#0: *1 [lua] pusher.lua:57: irc:1:2:pubsub, client: 127.0.0.1, server: , request: "GET /1/1/pusher HTTP/1.1", host: "127.0.0.1:8060"
2014/03/31 19:33:41 [notice] 60929#0: *1 [lua] pusher.lua:57: irc:1:3:pubsub, client: 127.0.0.1, server: , request: "GET /1/1/pusher HTTP/1.1", host: "127.0.0.1:8060"
2014/03/31 19:33:43 [notice] 60928#0: signal 20 (SIGCHLD) received
2014/03/31 19:33:43 [alert] 60928#0: worker process 60929 exited on signal 11
2014/03/31 19:33:43 [notice] 60928#0: start worker process 60964
2014/03/31 19:33:43 [notice] 60928#0: signal 23 (SIGIO) received
Minimal reproduction code:
local server = require "resty.websocket.server"
local redis = require "resty.redis"

-- WebSocket handshake with the client
local ws, err = server:new{
    timeout = 6000000,
    max_payload_len = 65535
}
if not ws then
    ngx.log(ngx.ERR, "failed to create websocket server: ", err)
    return ngx.exit(444)
end

-- red1: session control, red2: pub/sub
-- (connect errors are not checked, to keep the repro minimal)
local red1 = redis:new()
local red2 = redis:new()
red1:set_timeout(1000)
red2:set_timeout(6000000)
red1:connect("127.0.0.1", 6379)
red2:connect("127.0.0.1", 6379)

local res, err = red2:subscribe("dog")
if not res then
    ngx.log(ngx.ERR, err)
end

res, err = ws:send_close()
ngx.log(ngx.INFO, "closing")
if not res then
    ngx.log(ngx.ERR, err)
end

red2:unsubscribe("dog")

-- returning both connections to the pool is where the worker exits
red1:set_keepalive(1000, 100)
red2:set_keepalive(2000, 100)
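For comparison, the variant I would try instead is to not pool the subscribed connection at all and simply close it (this is only an assumption on my side, I have not confirmed that it avoids the crash); only the last lines change:

red1:set_keepalive(1000, 100)
-- close the pub/sub connection instead of putting it back into the pool
local ok, err = red2:close()
if not ok then
    ngx.log(ngx.ERR, "failed to close redis: ", err)
end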