Hello DeJiang Zhu,
thank you for your response.
I would like to try that, the cosocket approach. The problem with the Python code, though, is that I cannot start and exit it for every incoming request.
So I have to keep it running and somehow send input to the running while loop, take the output, and send it back to nginx. Is sending data back and forth like that, without executing the Python program afresh for every request, something that is possible with the cosocket approach?
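Concretely, I imagine the nginx side would look something like this (a rough sketch on my part; the socket path and the one-line-per-request protocol are just my assumptions):

-- in access_by_lua_block, say; the socket path is made up
local sock = ngx.socket.tcp()
sock:settimeout(100)  -- ms

local ok, err = sock:connect("unix:/tmp/predictor.sock")
if not ok then
    return ngx.exit(500)
end

sock:send(ngx.var.uri .. "\n")      -- feed the URI to the waiting while loop
local verdict = sock:receive("*l")  -- read back the "0" or "1"
sock:setkeepalive(10000, 16)        -- return the connection to the pool

Is that roughly the idea?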
Regarding rewrite_by_lua and access_by_lua:
I want the Python code to execute prior to both of these phases, because the only thing my Python code uses is the incoming URI. If I let the request run on to the rewrite or access phase, won't that add to the HTTPS handshake latency as well? I want to save that time on every request and avoid unnecessary latency.
On Monday, 29 July 2019 06:00:55 UTC+5:30, DeJiang Zhu wrote:
> rewrite_by_lua doesn't intercept the request prior to the ssl handshake
Sorry, I don't get it; does `ssl handshake` mean the HTTPS request?
rewrite_by_lua should behave the same as access_by_lua for your case; maybe there was a mistake in your test?
> 2. lua-resty-shell invokes an intermediate Python script, which talks to a tmux session where the ML Python program with the infinite while loop is already waiting for input. That program accepts a URI, prints a 1 or a 0 to the console, then waits for the next URI, and does the same for every request.
> 3. The problem arises when there is a race condition: many requests invoke many of these intermediate tmux/Python scripts, which all talk to the single ML Python process at its waiting prompt, sometimes causing unexpected multi-line prints at the console, which is only supposed to print exactly one 1 or 0 for the corresponding request at that moment.
I think the better way may be the one I mentioned above:
the Python program listens on a UNIX domain socket, and OpenResty communicates with it using a cosocket.
We just treat this Python program as a service. Then, for the race condition, we can add a request ID to it, like:
output: request-id + 0
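A minimal sketch of the OpenResty side, assuming the Python service listens on a hypothetical unix:/tmp/predictor.sock and speaks one line per request (`<request-id> <uri>` in, `<request-id> <0|1>` back):

local sock = ngx.socket.tcp()
sock:settimeout(100)  -- ms; the in-memory lookup should answer fast

local ok, err = sock:connect("unix:/tmp/predictor.sock")
if not ok then
    ngx.log(ngx.ERR, "predictor connect failed: ", err)
    return ngx.exit(500)
end

-- tag the request so a reply can never be matched to the wrong request
local id = ngx.var.request_id  -- nginx's built-in $request_id variable
sock:send(id .. " " .. ngx.var.uri .. "\n")

local line, rerr = sock:receive("*l")
sock:setkeepalive(10000, 16)  -- pool the connection across requests
if not line then
    ngx.log(ngx.ERR, "predictor read failed: ", rerr)
    return ngx.exit(500)
end

local reply_id, verdict = line:match("^(%S+)%s+([01])$")
if reply_id == id and verdict == "1" then
    -- take the "1" action here
end

Each nginx request then does its own round trip on its own pooled connection, so replies cannot get interleaved the way the shared tmux pane output did.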
> So, is there a way we can have this ML algo running on a thread which accepts inputs and produces outputs, so that I don't have to go the tmux route?
> Thank you.
On Saturday, 27 July 2019 07:24:29 UTC+5:30, DeJiang Zhu wrote:
> This program reads a file, parses some values and returns a 0 or a 1 based on the input URL sent as a cli flag to it.
I do not really understand this program. Does it read the URL from a file?
If this program is changed to listen on a socket, like a UNIX domain socket, then we can use a cosocket to communicate with it.
Or, if it can be invoked by a shell command, we can use resty.shell [1] to invoke it.
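For reference, the resty.shell call would look roughly like this (the script name and flag are placeholders; note this still pays the per-invocation start-up cost mentioned below):

local shell = require "resty.shell"

-- spawn the predictor once per request; "predict.py" and "--url"
-- stand in for the real script and CLI flag
local ok, stdout, stderr, reason, status =
    shell.run({ "python3", "predict.py", "--url", ngx.var.uri },
              nil,   -- nothing on stdin
              3000,  -- timeout in ms
              4096)  -- max output size in bytes

if ok and stdout:sub(1, 1) == "1" then
    -- take the "1" action here
end

Since this forks a new process every time, the socket-service approach is the better fit if the ~170 ms start-up cost matters.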
Hello,
I have been battling with a situation for a while. I have a Python program that acts as a predictor for some operations and returns a 0 or a 1. It is a standalone program, independent of nginx.
This program reads a file, parses some values, and returns a 0 or a 1 based on the input URL passed to it as a CLI flag. Execution times differ a lot: a fresh invocation takes about 170 milliseconds, whereas running an infinite while loop inside the program and letting it accept URLs returns a 0 or a 1 in under 2 milliseconds, since it parses the file from memory.
I have tried Redis and tmux sessions, but neither works out: Redis effectively yields the same 170-odd milliseconds per execution, while the problem with libtmux is that I am unable to fetch just the last line of the pane output per execution; it always returns the full screen text, which is not what I want.
I plan to pass the incoming nginx URI to this program as the CLI flag and decide the actions to take based on the 1 or 0 response. Effectively, I just want a way to reach an in-memory, already-running program, supply inputs to it, and fetch outputs from it via nginx/Lua, and then perform my calculations.
Any help will be highly appreciated!
Thanks a lot.