I have a FastCGI upstream configured as follows:
# loadbalancing PHP
upstream myLoadBalancer {
    least_conn;
    server 127.0.0.1:9001 weight=1 fail_timeout=5;
    server 127.0.0.1:9002 weight=1 fail_timeout=5;
    server 127.0.0.1:9003 weight=1 fail_timeout=5;
    server 127.0.0.1:9004 weight=1 fail_timeout=5;
    server 127.0.0.1:9005 weight=1 fail_timeout=5;
    server 127.0.0.1:9006 weight=1 fail_timeout=5;
    server 127.0.0.1:9007 weight=1 fail_timeout=5;
    server 127.0.0.1:9008 weight=1 fail_timeout=5;
    server 127.0.0.1:9009 weight=1 fail_timeout=5;
    server 127.0.0.1:9010 weight=1 fail_timeout=5;
}
# pass the PHP scripts to the FastCGI servers listening on 127.0.0.1:9001-9010
#
location ~ \.php$ {
    root html;
    fastcgi_pass myLoadBalancer; # or a single server, e.g. 127.0.0.1:9000
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
Occasionally I receive an Internal Server Error when accessing a page processed by PHP. The error log shows this:
WSARecv() failed (10054: An existing connection was forcibly closed by the remote host) while reading response header from upstream, client:
Upon investigating, it seems the PHP process started responding (sending Nginx the response headers) and then cut Nginx off, either because PHP crashed or because the process was closed abruptly.
"PHP_FCGI_MAX_REQUESTS" could be the culprit: once a php-cgi process hits its request limit it exits, dropping whatever connection it was serving at that moment.
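If the request limit is the cause, one mitigation is to raise PHP_FCGI_MAX_REQUESTS in the environment before spawning each php-cgi worker (its default is 500). A rough Windows batch sketch, assuming the ten ports from the upstream above and a placeholder php-cgi path:

```bat
@echo off
REM Let each worker serve far more requests before recycling itself
set PHP_FCGI_MAX_REQUESTS=10000
REM Spawn one php-cgi listener per upstream port (9001-9010);
REM C:\php\php-cgi.exe is a placeholder path
for /L %%p in (9001,1,9010) do (
    start "php-cgi %%p" C:\php\php-cgi.exe -b 127.0.0.1:%%p
)
```

This only delays the recycle rather than eliminating it, so Nginx-side retry handling is still worth having.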
Using Lua, is it possible to avoid returning a 500 Internal Server Error and instead pass the request on to the next upstream? That is what I had in mind as a good way to solve this: if the process closes abruptly, catch that request with Lua and hand it to the next upstream in the queue.
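For reference, if the Lua route is taken, ngx_lua's balancer_by_lua_block together with the ngx.balancer API from lua-resty-core can select a peer and permit retries. A very rough sketch (the port table mirrors the upstream above; the rotation logic is just illustrative, not production peer selection):

```nginx
upstream myLoadBalancer {
    server 0.0.0.1;   # placeholder; real peers are chosen in Lua below

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        local ports = { 9001, 9002, 9003, 9004, 9005,
                        9006, 9007, 9008, 9009, 9010 }

        -- Count attempts for this request; ngx.ctx persists across retries
        ngx.ctx.tries = (ngx.ctx.tries or 0) + 1
        if ngx.ctx.tries == 1 then
            -- Allow up to two further attempts if the chosen peer fails
            balancer.set_more_tries(2)
        end

        -- Naive rotation: per-request offset, advanced on each retry
        local start = tonumber(ngx.var.connection) or 0
        local idx = (start + ngx.ctx.tries) % #ports + 1

        local ok, err = balancer.set_current_peer("127.0.0.1", ports[idx])
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
            return ngx.exit(500)
        end
    }
}
```

Note that what counts as a retryable failure is still governed by fastcgi_next_upstream, so that directive matters even with a Lua balancer.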
My "fastcgi_next_upstream error timeout;" setting is at its default, as documented here: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_next_upstream
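Before reaching for Lua, it may be enough to broaden fastcgi_next_upstream so this failure mode triggers a retry on the next server. A sketch of the location block with such tuning (the exact condition list and try count are guesses to adjust, and fastcgi_next_upstream_tries requires nginx 1.7.5 or later):

```nginx
location ~ \.php$ {
    root html;
    fastcgi_pass myLoadBalancer;
    # Retry the next peer on connection errors, timeouts,
    # a malformed header, or a 500 reply from the worker
    fastcgi_next_upstream error timeout invalid_header http_500;
    # Bound retries so one bad request cannot cycle all ten workers
    fastcgi_next_upstream_tries 3;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```

One caveat: nginx can only pass a request to the next server if nothing has been sent to the client yet, so a worker that dies mid-response cannot be retried by any mechanism.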
Does anyone know a decent way to solve this annoyance? Thanks in advance; looking forward to any light you can shed on this dilemma :)