Hello,
I've used both of the setups you mention. I think it's highly specific to your application and what you're trying to accomplish. I prefer B if the application and network are conducive to it, though the extra hop can be problematic in latency-sensitive applications. I've even done B with an intermediate 'midgress' openresty layer in between.
Setup B has some unique features that I like from an operational perspective:
* For networks that do not have upstream load balancers, you can implement sticky sessions to your backends.
* With sticky sessions in place, you can then implement rate limiting (limit_req) on the backend tier with a much higher degree of success, since a given client always lands on the same backend.
* For applications that do not use php-fpm and bind to interfaces directly (e.g. nodejs), it lets your application tier keep a static upstream configuration. In setup A, the upstream openresty needs to know about every host/port combination. It is a common pattern for nodejs applications to launch one instance per logical CPU core, so you end up with applications listening across a range of ports.
* This setup provides an excellent hooking point for adding very complex healthchecks in front of applications that do not support healthchecks, or whose healthchecks do not cover enough to be useful.
* For geo-aware applications, openresty is faster than most applications at performing geo lookups. As an example, nodejs is extremely limited in the amount of memory a single process can allocate (I think it's ~2GB), so loading a geo database for each application process is inefficient and adds lookup complexity to your application.
* The real_ip module is much easier to deal with in nginx than it is in php-fpm.
* Unified access and error log files across multiple application instances on the same host.
* I use headers to implement small payload messaging between upstream load balancers and downstream application tier.
* With the resty LRU cache or ngx.shared.dict, you can create a RESTful interface that gives the illusion of application IPC. Again, this is a good fit for nodejs, because the native LRU cache degrades performance in memory-constrained applications. Rather than allocating a cache for each app instance, you now share state across all workers.
There are a lot more, but these are just a few off the top of my head. Most of these are pretty specific to the applications/networks that I work on and may not be as useful for a simple wordpress-type app. A few rough config sketches follow to illustrate some of the points above; hosts, ports, paths, and rates are placeholders, so adjust for your environment.
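
Sticky sessions plus limit_req, along the lines of the first two points. A minimal sketch: the frontend pins clients by IP, and the backend openresty does the actual limiting (addresses, rates, and socket paths are made up):

    # frontend openresty
    upstream app_backends {
        ip_hash;                      # a given client IP always hits the same backend
        server 10.0.0.11;
        server 10.0.0.12;
    }

    server {
        listen 80;
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://app_backends;
        }
    }

    # backend openresty (assumes real_ip is configured as in the sketch further
    # down, so $binary_remote_addr is the end client, not the frontend proxy)
    limit_req_zone $binary_remote_addr zone=perclient:10m rate=10r/s;

    server {
        listen 80;
        root /var/www/app;

        location ~ \.php$ {
            limit_req zone=perclient burst=20;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php-fpm.sock;
        }
    }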
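
The static upstream point, for a nodejs app that launches one instance per core. The frontend proxy only ever needs to know about this host; the per-port fan-out stays local (ports are placeholders):

    # backend openresty on the app host
    upstream node_app {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
        keepalive 32;
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://node_app;
        }
    }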
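
The healthcheck hooking point: a sketch of a synthetic /health endpoint on the backend openresty that the upstream load balancer can poll, for an app whose own status page isn't useful by itself (the /status URI and the expected body are assumptions):

    location = /health {
        content_by_lua_block {
            local res = ngx.location.capture("/_app_status")
            if res.status == 200 and res.body and res.body:find("ok", 1, true) then
                ngx.say("healthy")
            else
                ngx.status = 503
                ngx.say("unhealthy")
            end
        }
    }

    location = /_app_status {
        internal;
        proxy_pass http://node_app/status;
    }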
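
Geo lookups done once in openresty and handed down as request headers, which is also how the small-payload messaging between tiers works in practice. This assumes nginx was built with the geoip module and a country database is on disk (the path is a placeholder):

    # http{} context on the openresty tier
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    server {
        listen 80;
        location / {
            # small-payload "messages" for the app tier
            proxy_set_header X-Geo-Country $geoip_country_code;
            proxy_set_header X-Request-Start $msec;
            proxy_pass http://node_app;
        }
    }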
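
real_ip on the backend tier, so the application only ever sees the end-client address (the CIDR stands in for whatever network your frontend proxies live on):

    # backend openresty, http{} or server{} context
    set_real_ip_from 10.0.0.0/24;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;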
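
And the ngx.shared.dict "illusion of IPC": a tiny REST-ish key/value endpoint that every nodejs instance on the host can hit over loopback instead of each keeping its own in-process cache. The dict name, size, TTL, and port are all made up:

    # http{} context
    lua_shared_dict app_cache 64m;

    server {
        listen 127.0.0.1:8090;

        location ~ ^/cache/(?<key>.+)$ {
            content_by_lua_block {
                local cache = ngx.shared.app_cache
                local key = ngx.var.key

                if ngx.req.get_method() == "PUT" then
                    ngx.req.read_body()
                    local body = ngx.req.get_body_data() or ""
                    cache:set(key, body, 300)    -- 5 minute TTL
                    return ngx.exit(ngx.HTTP_NO_CONTENT)
                end

                local val = cache:get(key)
                if val == nil then
                    return ngx.exit(ngx.HTTP_NOT_FOUND)
                end
                ngx.say(val)
            }
        }
    }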
On 09/29/2015 08:59 AM, Michael Park wrote:
> Hello,
>
> I'm curious on what other nginx/openresty users prefer when they have
>
> 1. one (proxy server) server
> 2. multiple (web app server) php-fpm servers
>
> You can setup nginx/openresty (proxy server) so that it proxies to
> multiple (web app server) php-fpm servers directly like
> A. (proxy server)nginx/openresty <-> (web server)php-fpm
>
> Or you can setup nginx/openresty (proxy server) so that it proxies to
> multiple (web app server) php-fpm servers indirectly like
> B. (proxy server)nginx/openresty <-> (web server)nginx/openresty <->
> (web server)php-fpm
>
> A would have an advantage of everything being logged in one server and
> easier to manage, and probably a tiny bit faster
> B would have an advantage of being able to configure each server
> specifically such as using the
> tengine http://tengine.taobao.org/document/http_sysguard.html sysguard
> module.
>
> Personally, I prefer using the A setup (with a combination of munin,
> monit for monitoring).
>
> What setup do you prefer?.