Hi everyone,
I hope this is appropriate for this channel. I'm looking for a bit of advice about how to set up an OpenResty proxy with a "stale" caching mechanism as a fallback measure.
The current scenario is that we have traffic hitting a CDN that proxies through to a backend service. The CDN follows the cache-control headers on the responses and handles all of our caching for us (generally 30 minutes).
External --> CDN (with cache) --> Backend
This works fine, but we'd like to add some resilience against temporary outages in the backend service.
The solution I'm working on is to put an OpenResty service between the CDN and the backend that proxies all requests through and captures the responses to a Redis instance. When the backend returns an error (or a timeout occurs), we'd like to serve the "stale" data that was previously captured. We're only concerned with GET requests.
External --> CDN (with cache) --> OpenResty --> Backend
                                      |
                                      V
                                    Redis
So in the happy case, when everything is going fine, the cache is unused but is constantly updated with the latest response to each request; when there are problems with the backend, we fall back to the cache and serve the stale data. As I understand it, the stored entry will include the original response headers, so the CDN will still cache based on the cache-control directives.
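To give a concrete picture of the fallback side, this is roughly the shape of the prototype (a simplified sketch rather than the actual config from the gist; the `backend` upstream, the Redis address, and the key scheme are placeholders): errors from the backend, and nginx's own timeout statuses, are redirected to a named location that replays the stored response out of Redis.

```
# Sketch of the fallback wiring ("backend", the Redis address, and
# the "stale:" key scheme are placeholders).
location / {
    proxy_pass http://backend;

    # Intercept error statuses returned by the backend; nginx-generated
    # timeout errors (502/504) are caught by error_page regardless.
    proxy_intercept_errors on;
    error_page 500 502 503 504 = @stale;
}

location @stale {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- ms; fail fast if Redis is down too

        if not red:connect("127.0.0.1", 6379) then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- Each response is stored under two keys: serialized headers
        -- plus the raw body.
        local key = "stale:" .. ngx.var.request_uri
        local body = red:get(key .. ":body")
        if not body or body == ngx.null then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- Replay the captured headers so the CDN still sees the
        -- original cache-control directives. (In practice hop-by-hop
        -- headers like Connection should be skipped here.)
        local headers = red:get(key .. ":headers")
        if headers and headers ~= ngx.null then
            for name, value in headers:gmatch("([^\r\n:]+): ([^\r\n]+)") do
                ngx.header[name] = value
            end
        end

        red:set_keepalive(10000, 100)
        ngx.print(body)
    }
}
```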
We're looking at Redis for the cache because we'd be running OpenResty as a cluster of Docker containers, so a local disk cache wouldn't work (each container would only see its own copy).
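On the capture side, the wrinkle is that cosockets (and therefore lua-resty-redis) aren't available in the log phase, so the prototype buffers the response body in a body filter and defers the Redis write to a zero-delay ngx.timer.at. Roughly, again with placeholder names:

```
# Capture path, inside the same "location /" block as the proxy_pass
# above. The body is accumulated in a body filter; the Redis write is
# deferred to a timer because cosockets are disabled in log_by_lua.
body_filter_by_lua_block {
    if ngx.req.get_method() == "GET" and ngx.status == 200 then
        local chunk, eof = ngx.arg[1], ngx.arg[2]
        ngx.ctx.buf = (ngx.ctx.buf or "") .. (chunk or "")
        if eof then
            ngx.ctx.body = ngx.ctx.buf
        end
    end
}

log_by_lua_block {
    local body = ngx.ctx.body
    if not body then return end

    -- Serialize the upstream response headers for replay later.
    local lines = {}
    for name, value in pairs(ngx.resp.get_headers()) do
        if type(value) == "table" then value = table.concat(value, ", ") end
        lines[#lines + 1] = name .. ": " .. value
    end
    local headers = table.concat(lines, "\r\n")
    local key = "stale:" .. ngx.var.request_uri

    -- Cosockets don't work in this phase, so hand off to a timer.
    local ok, err = ngx.timer.at(0, function(premature, key, headers, body)
        if premature then return end
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)
        if not red:connect("127.0.0.1", 6379) then return end
        red:set(key .. ":headers", headers)
        red:set(key .. ":body", body)
        red:set_keepalive(10000, 100)
    end, key, headers, body)
    if not ok then
        ngx.log(ngx.ERR, "failed to schedule cache write: ", err)
    end
}
```

The obvious trade-off is that this buffers each response body in memory for the duration of the request, which I'm assuming is acceptable for our response sizes.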
I have a working prototype using OpenResty and Redis that seems to function as intended under small amounts of load. I was hoping someone more knowledgeable about NGINX/OpenResty could look it over in case there's anything obvious I've overlooked, or in case I've misunderstood something about the components I'm using.
https://gist.github.com/tmo-trustpilot/af8d9a7e3c6777edba080dff731853d3
If anything here doesn't make sense or you have alternative suggestions I'd love to hear them.
Thanks,
Toby Moore