On Wed, Apr 18, 2012 at 8:56 AM, Markus Walther <dr.marikos@gmail.com> wrote:
> Hi agentzh,
>
>> I want to ask you a few questions:
>>
>> 1. What's your version of ngx_openresty?
>
> The latest.
>
> nginx version: ngx_openresty/1.0.11.28
>
>> 2. And what does your nginx.conf look like?
>
> It's the verbatim Benchmark config:
>
> worker_processes 1;
> error_log logs/error.log;
> events {
> worker_connections 1024;
> }
> http {
> server {
> listen 8080;
> location / {
> default_type text/html;
> content_by_lua '
> ngx.say("<p>hello, world</p>")
> ';
> }
> }
> }
>
>> 3. Is there anything interesting in your error.log? Was it flooded
>> with a lot of messages?
>
> It's empty:
>
> $ ls -l logs/error.log
> -rw-r--r-- 1 mw staff 0 16 Apr 21:36 logs/error.log
>
I've just tried ngx_openresty 1.0.11.28 on my MacBook Air (Mac OS X
10.6.8) with exactly the same nginx.conf that you're using. And here
are the results when using the weighttp tool:
$ weighttp -k -c10 -n10000 'http://127.0.0.1:8080/'
weighttp - a lightweight and simple webserver benchmarking tool
starting benchmark...
spawning thread #1: 10 concurrent requests, 10000 total requests
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done
finished in 0 sec, 618 millisec and 9 microsec, 16180 req/s, 3112 kbyte/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1969520 bytes total, 1659520 bytes http, 310000 bytes data
And here are the results when using ab:
$ ab -k -c10 -n10000 -t1 'http://127.0.0.1:8080/'
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Finished 18535 requests
Server Software: ngx_openresty/1.0.11.28
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 20 bytes
Concurrency Level: 10
Time taken for tests: 1.000 seconds
Complete requests: 18535
Failed requests: 0
Write errors: 0
Keep-Alive requests: 18354
Total transferred: 3298325 bytes
HTML transferred: 370700 bytes
Requests per second: 18531.83 [#/sec] (mean)
Time per request: 0.540 [ms] (mean)
Time per request: 0.054 [ms] (mean, across all concurrent requests)
Transfer rate: 3220.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 1
Processing: 0 1 0.2 0 4
Waiting: 0 0 0.2 0 4
Total: 0 1 0.2 0 4
ERROR: The median and mean for the processing time are more than twice the standard deviation apart. These results are NOT reliable.
ERROR: The median and mean for the total time are more than twice the standard deviation apart. These results are NOT reliable.
Percentage of the requests served within a certain time (ms)
50% 0
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 1
100% 4 (longest request)
And the results are consistent across several runs.
>>
>> Also, it's good to enable HTTP keepalive in your benchmarking tools
>> (for example, passing -k to your ab tool) so that it won't run out of
>> the dynamic port range in your Mac OS X system.
>
> OK, that changed the observed behaviour a bit, though it still
> failed nondeterministically at times. Here is a successful test
> with ab:
>
> $ ab -n 10000 -c 10 -t 1 -k http://127.0.0.1:8080/
> This is ApacheBench, Version 2.3 <$Revision: 655654 $>
> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Licensed to The Apache Software Foundation, http://www.apache.org/
>
> Benchmarking 127.0.0.1 (be patient)
> Send request failed!
It's known that ab has issues on Mac OS X Lion and requires patches to
work properly.
> Completed 5000 requests
> Finished 8607 requests
>
>
> Server Software: ngx_openresty/1.0.11.28
> Server Hostname: 127.0.0.1
> Server Port: 8080
>
> Document Path: /
> Document Length: 20 bytes
>
> Concurrency Level: 10
> Time taken for tests: 1.000 seconds
> Complete requests: 8607
> Failed requests: 7487
> (Connect: 0, Receive: 0, Length: 7487, Exceptions: 0)
> Write errors: 1
> Keep-Alive requests: 8267
> Total transferred: 6055996 bytes
> HTML transferred: 4743537 bytes
> Requests per second: 8606.43 [#/sec] (mean)
> Time per request: 1.162 [ms] (mean)
> Time per request: 0.116 [ms] (mean, across all concurrent requests)
> Transfer rate: 5913.67 [Kbytes/sec] received
>
> Connection Times (ms)
> min mean[+/-sd] median max
> Connect: 0 1 3.5 0 29
> Processing: 0 0 1.2 0 10
> Waiting: 0 0 1.2 0 10
> Total: 0 1 3.6 0 29
>
> Percentage of the requests served within a certain time (ms)
> 50% 0
> 66% 0
> 75% 0
> 80% 0
> 90% 3
> 95% 6
> 98% 16
> 99% 22
> 100% 29 (longest request)
>
> I don't see an option for keep-alive with httpload.
>
> But playing around with parameters there gives mixed results:
>
The mixed results were very likely caused by too many connections
lingering in the TIME_WAIT state, so your httpload client often ran
out of the dynamic port range. You may consider adjusting the system
parameters to enlarge the dynamic port range and/or shorten the
TIME_WAIT timeout period, and/or enabling HTTP keepalive in your
benchmarking tool.
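On Mac OS X those knobs are sysctl settings. Here is a rough sketch
(the exact values below are only illustrative, and the changes do not
persist across reboots):

$ # see how many connections are currently stuck in TIME_WAIT
$ netstat -an | grep -c TIME_WAIT

$ # enlarge the ephemeral (dynamic) port range
$ sudo sysctl -w net.inet.ip.portrange.first=16384
$ sudo sysctl -w net.inet.ip.portrange.last=65535

$ # shorten TIME_WAIT by lowering the maximum segment lifetime
$ # (TIME_WAIT lasts 2 * MSL; net.inet.tcp.msl is in milliseconds)
$ sudo sysctl -w net.inet.tcp.msl=1000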
I suggest you give weighttp a try :)
> GOOD CASE
>
> $ echo "http://127.0.0.1:8080/" | ~/perusio-httpload-de5a208/httpload -p 10 \
> -fetches 10000 -timeout 5 /dev/stdin
>
> 10000 fetches, 10 max parallel, 200000 bytes, in 0.674863 seconds
> 20 mean bytes/connection
> 14817.8 fetches/sec, 296356 bytes/sec
> msecs/connect: 0.228042 mean, 22.58 max, 0.048 min
> msecs/first-response: 0.434652 mean, 22.595 max, 0.214 min
> HTTP response codes:
> code 200 -- 10000
>
> BAD CASE
>
> $ echo "http://127.0.0.1:8080/" | ~/perusio-httpload-de5a208/httpload
> -p 10 -fetches 10000 -timeout 5 /dev/stdin
> http://127.0.0.1:8080/: timed out
> http://127.0.0.1:8080/: byte count wrong
> [... lots of similar lines elided ...]
> http://127.0.0.1:8080/: timed out
> http://127.0.0.1:8080/: byte count wrong
> 10000 fetches, 10 max parallel, 199000 bytes, in 25.9368 seconds
> 19.9 mean bytes/connection
> 385.552 fetches/sec, 7672.48 bytes/sec
> msecs/connect: 0.279961 mean, 195.621 max, 0.058 min
> msecs/first-response: 0.467631 mean, 84.738 max, 0.066 min
> 50 timeouts
> 50 bad byte counts
> HTTP response codes:
> code 200 -- 9950
>
> Conclusion: I don't really understand httpload / nginx ...
>
>>
>> Regards,
>> -agentzh
>>
>> P.S. I've cc'd the openresty mailing list so that the whole team can
>> see it: https://groups.google.com/group/openresty
>
> Thanks a lot, Markus