I introduced plain (empty) Lua phase handlers, and this led to a 20% performance reduction. Is this normal, or is something wrong with the way I introduced the module?
version info:
------
nginx version: nginx/1.13.6
lua-nginx-module-0.10.11
LuaJIT 2.1.0-beta3
lua-resty-core-0.1.15
------
perf_proxy is the origin server in the same LAN, and a third server in the same LAN runs wrk.
The config file is as below:
----------
...
worker_processes 1;
worker_cpu_affinity 100;
...
init_by_lua_block {
    require "resty.core"
    local waf = require "waf"
    waf.init()
}

server {
    server_name www.perf.com;
    listen [::]:80 http2 reuseport;

    access_by_lua_block {
    }
    header_filter_by_lua_block {
    }
    body_filter_by_lua_block {
    }
    log_by_lua_block {
    }

    gzip on;
    gunzip on;

    location / {
        proxy_pass http://perf_proxy;
    }
}
---------
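The waf module itself is not shown here. For reproducing this test it should not matter: only waf.init() is called, from init_by_lua_block at startup, and every per-request *_by_lua_block body is empty. A minimal stand-in like the following stub (hypothetical, just to make the config above runnable) is enough:
----------
-- waf.lua: minimal stand-in so the config above can be reproduced.
-- Only init() is invoked (once, from init_by_lua_block); the per-request
-- *_by_lua_block handlers in the config are empty.
local _M = {}

function _M.init()
    -- startup-only initialization; nothing runs per request
end

return _M
----------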
With no *_by_lua_block directives, I get this result:
[root@f227 wrk-master]# ./wrk -t3 -c100 -d60s http://www.perf.com/index.html
Running 1m test @ http://www.perf.com/index.html
3 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 9.54ms 1.03ms 29.94ms 81.69%
Req/Sec 3.47k 249.98 3.98k 82.67%
621995 requests in 1.00m, 788.34MB read
Requests/sec: 10363.66
Transfer/sec: 13.14MB
With the (empty) *_by_lua_block directives added, I get:
[root@227 wrk-master]# ./wrk -t3 -c100 -d60s http://www.perf.com/index.html
Running 1m test @ http://www.perf.com/index.html
3 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 12.33ms 1.51ms 56.12ms 89.70%
Req/Sec 2.69k 185.09 3.03k 90.17%
481782 requests in 1.00m, 610.63MB read
Requests/sec: 8026.02
Transfer/sec: 10.17MB
Requests/sec dropped from roughly 10,300 to roughly 8,000.
A flame graph is attached. Searching for "lua" in the flame graph shows the Lua frames account for about 20% of the samples as well.
Has anyone run into the same situation?
Attachment:
c_plain_lua.svg