On Wed, Aug 30, 2017 at 7:36 PM Yichun Zhang (agentzh) <
age...@gmail.com> wrote:
> Hello!
>
> On Wed, Aug 30, 2017 at 7:57 AM, Jim Robinson wrote:
>> So everything I've read indicates it's a very bad idea to attempt to
>> perform file I/O using the stock Lua io libraries from within OpenResty
>> Lua code. Does using a ramdisk for the file mitigate any of the
>> problems, or is it still a really bad idea?
>>
>
> It's not really a bad idea as long as the disk I/O operations are
> relatively small and their latency is manageable.
>
> NGINX core uses blocking file I/O by default for its http cache,
> access logging, and error logging, for example. Just don't panic.
>
> If you use a ramdisk, then you can probably just use the shared dict
> API instead, since the latter does not incur any file system call
> overhead at all.
Ah, that's good to know. The specific details are that I'm finally
moving my OpenResty-based Lua code from ngx.location.capture to
lua-resty-http. That gives me access to handling a streaming
response, which is a big win -- I can start sending the response
back to the client before all of it has been downloaded.
But now I need to be able to store the response somewhere until
it's completely sent so that I can then turn around and add it to
a custom cache layer.
The particular use case I'm interested in involves big files, e.g.,
400-megabyte files fetched from the backend.
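For reference, here's roughly the shape of the streaming loop I have
in mind with lua-resty-http -- just a sketch: the host, path, and
chunk size below are placeholders, and error handling is minimal:

```lua
-- Sketch only: "backend.example.com" and "/big-file" are placeholders.
local http = require "resty.http"

local httpc = http.new()
local ok, err = httpc:connect("backend.example.com", 80)
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return ngx.exit(502)
end

local res, err = httpc:request{
    path = "/big-file",
    headers = { ["Host"] = "backend.example.com" },
}
if not res then
    ngx.log(ngx.ERR, "request failed: ", err)
    return ngx.exit(502)
end

local reader = res.body_reader
local chunks = {}

repeat
    local chunk, read_err = reader(8192)  -- read up to 8 KB at a time
    if read_err then
        ngx.log(ngx.ERR, "read error: ", read_err)
        break
    end
    if chunk then
        ngx.print(chunk)             -- stream to the client immediately
        ngx.flush(true)
        chunks[#chunks + 1] = chunk  -- keep a copy for the cache layer
    end
until not chunk

httpc:set_keepalive()

-- full body now available for the custom cache layer:
local body = table.concat(chunks)
```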
I hadn't thought about using the shared dict API; I guess that
would make it just as easy to access the data from a separate
coroutine as using a shared file would. I'll look into that.
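Sketching it out to make sure I follow -- the dict name, key, and
zone size here are made up, and the zone has to be declared in
nginx.conf first:

```lua
-- Assumes nginx.conf contains:  lua_shared_dict my_cache 512m;
-- (the name "my_cache" and the key below are placeholders)
local dict = ngx.shared.my_cache
local key = "resp:example"

-- writer side: stash a chunk with a 60-second TTL
local ok, err = dict:set(key, "chunk data here", 60)
if not ok then
    ngx.log(ngx.ERR, "shared dict set failed: ", err)
end

-- reader side, e.g. from a light thread spawned with ngx.thread.spawn,
-- since the dict is visible to all workers and coroutines:
local data = dict:get(key)
```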
As always, thank you for the advice.
Jim