Thanks, agentzh, for the info.
I checked and saw that the memcached CPU usage is almost 0 and that there are only around 20 concurrent connections. This server also serves another file (handled by lua-resty) at around 1k requests/sec; that code can also hit memcached (when the lookup misses in the nginx shared dict), but it uses the lua-resty-memcached module for the connection.
For memcached we are using the default number of threads, which is 4. I will try increasing that to match the number of cores.
As for tmpfs, since it is volatile, I need to work out how to repopulate it after each restart. Thanks again for all the info.
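For reference, one common way to handle the volatility (a sketch only; the mount point, size, and source directory below are hypothetical, and the exact hook depends on your init system) is to declare the tmpfs mount in /etc/fstab and copy the files in from a persistent on-disk copy at boot:

```
# /etc/fstab entry (hypothetical mount point and size):
#   tmpfs  /var/www/js-tmpfs  tmpfs  size=16m,mode=0755  0  0
#
# Boot-time repopulation, run from e.g. rc.local or a systemd oneshot
# unit; /var/www/js-persistent is the assumed on-disk master copy:
#   mkdir -p /var/www/js-tmpfs
#   cp -a /var/www/js-persistent/. /var/www/js-tmpfs/
```

The same copy step can be re-run whenever the .js files are updated, since the persistent directory remains the source of truth.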
On Wednesday, 6 February 2013 01:12:37 UTC+5:30, agentzh wrote:
Hello!
On Tue, Feb 5, 2013 at 12:13 AM, <ankit...@gmail.com> wrote:
> I am trying to setup a server where in I have to serve static JS files
> (total of 4 files ~ 7-30KB in size). I am currently using SRCache to do it,
> but I am not happy with the memcached required with it to store the content
> as memcached is showing Connection Timeouts when the no. of connections
> increases. The config I currently have is as following:
>
>
>> upstream memcached_backend {
>> server unix:/tmp/memcached.sock;
>> keepalive 32;
>> }
>
What's the number of concurrent requests that your Nginx is handling?
You may want to increase the keepalive connection pool size (32).
Also, what's the CPU usage of the memcached process under load? Is it
really high? You may want to enable the multi-thread support in your
memcached server too. And/or run multiple memcached instances
and shard keys across them in your location = /memc via something like
set_hashed_upstream (from the ngx_set_misc module).
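A sketch of that sharding setup, assuming the usual srcache + memc-nginx-module layout (the socket paths and upstream names are made up for illustration, and the upstream_list grouping directive may require an additional module/patch depending on your build):

```nginx
# Two memcached instances behind a hashed upstream list.
upstream memc1 { server unix:/tmp/memcached1.sock; keepalive 64; }
upstream memc2 { server unix:/tmp/memcached2.sock; keepalive 64; }

# Group the upstreams so set_hashed_upstream can pick one by key hash.
upstream_list memc_cluster memc1 memc2;

location = /memc {
    internal;
    set $memc_key $query_string;
    # Consistently map each cache key onto one upstream in memc_cluster.
    set_hashed_upstream $memc_backend memc_cluster $memc_key;
    memc_pass $memc_backend;
}
```

Because the hash is computed per key, each cached .js file always lands on the same memcached instance, so the shards never duplicate each other's data.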
Also, you may get better results by simply putting your static .js
files on a tmpfs partition instead of using any server-side cache. The
ngx_srcache module is best suited to caching dynamic responses.
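For completeness, serving the files straight off a tmpfs mount could look like this (a sketch; the mount path is hypothetical and the cache tunables are just starting points):

```nginx
# Assumes /var/www/js-tmpfs is a tmpfs mount holding the static .js files.
location ~* \.js$ {
    root /var/www/js-tmpfs;
    expires 1h;                           # adjust client caching to taste
    open_file_cache max=16 inactive=60s;  # keep fds for the few hot files
}
```

Since the files already live in RAM, nginx serves them with a plain read and no memcached round-trip at all, which sidesteps the connection-timeout problem entirely.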
Best regards,
-agentzh