Hi,
I have a server endpoint that receives batched data that I want to write to a file. Each batch contains N payloads, and I would like each payload to end up on its own line in a log file that I will eventually push to S3.
My current solution is to define a custom access log format whose last element is a variable that I populate with a JSON array containing the N payloads. I then need an ETL step (Amazon EMR with Cascading) to read that JSON array and put each payload on its own line. I would really like to remove that step and have OpenResty write one line per payload, either in the access log or in a separate file.
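Roughly, the current setup looks like this (simplified; the endpoint name, variable name, and paths are placeholders, and I'm assuming here that the request body is the JSON array itself):

    # http context
    log_format batch '$remote_addr [$time_local] $batch_json';

    server {
        location /ingest {
            # declared so it can be assigned from Lua
            set $batch_json "";
            access_log /var/log/nginx/batches.log batch;

            content_by_lua_block {
                ngx.req.read_body()
                -- the whole JSON array of N payloads becomes the last
                -- field of a single access-log line
                ngx.var.batch_json = ngx.req.get_body_data() or "[]"
                ngx.say("ok")
            }
        }
    }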
Things I have thought of or tried that don't work:
- writing to a file directly in Lua, which is highly discouraged because I would have to manage the locks myself (sketch 1 below)
- creating an internal endpoint and hitting it N times with ngx.location.capture; no logs are written even though the location has an access_log directive (sketch 2 below)
- making real HTTP requests from Lua with a third-party library: N requests to a non-internal location on the same server that has a custom access_log directive (sketch 3 below). It seems very wasteful to incur all the extra HTTP overhead for what is essentially an internal request.
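To make those attempts concrete, here are rough sketches of each (endpoint names, paths, and formats are placeholders and everything is simplified). Sketch 1, writing the file directly from Lua:

    location /ingest {
        content_by_lua_block {
            local cjson = require "cjson.safe"
            ngx.req.read_body()
            local batch = cjson.decode(ngx.req.get_body_data()) or {}

            -- every worker process appends to the same file, so I would
            -- have to manage the locking myself, and plain io blocks
            -- the worker
            local f = io.open("/var/log/app/payloads.log", "a")
            for _, p in ipairs(batch) do
                f:write(cjson.encode(p), "\n")
            end
            f:close()
            ngx.say("ok")
        }
    }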
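Sketch 2, one subrequest per payload via ngx.location.capture; even with the access_log directive on /log_line, nothing gets written:

    # http context
    log_format one_payload '$payload';

    server {
        location /ingest {
            content_by_lua_block {
                local cjson = require "cjson.safe"
                ngx.req.read_body()
                local batch = cjson.decode(ngx.req.get_body_data()) or {}
                for _, p in ipairs(batch) do
                    -- hoping each subrequest produces one access-log line
                    ngx.location.capture("/log_line?p=" ..
                        ngx.escape_uri(cjson.encode(p)))
                end
                ngx.say("ok")
            }
        }

        location = /log_line {
            internal;
            set $payload $arg_p;
            access_log /var/log/nginx/payloads.log one_payload;
            return 204;
        }
    }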
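Sketch 3, real HTTP requests over the loopback instead of subrequests (I'm showing lua-resty-http only as an example client; any third-party HTTP library would look similar):

    location /ingest {
        content_by_lua_block {
            local http = require "resty.http"
            local cjson = require "cjson.safe"
            ngx.req.read_body()
            local batch = cjson.decode(ngx.req.get_body_data()) or {}
            local httpc = http.new()
            for _, p in ipairs(batch) do
                -- a full HTTP round trip per payload, over loopback, to a
                -- /log_line location like the one in sketch 2 but without
                -- "internal", so it is reachable as a normal request
                httpc:request_uri("http://127.0.0.1/log_line?p=" ..
                    ngx.escape_uri(cjson.encode(p)), { method = "GET" })
            end
            ngx.say("ok")
        }
    }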
Nginx is already so good at writing data out to files and at handling log rotation via the USR1 signal; I really want to figure out how to piggy-back on that.
Any ideas would be appreciated!
Thanks,
Jawad