Thank you very much again.
I have a fairly large data file whose structure is inspired by the MaxMind DB format - https://maxmind.github.io/MaxMind-DB/. Record access is very fast with pointer math, and there is no memory overhead - I have already tried it in C.
You might be right to suggest redis, but allocating enough memory with FFI and copying the file's contents into it in "init_by_lua" would probably work better in such cases - faster, and with fewer resources consumed.
But I will try both solutions :-)
On Tuesday, August 23, 2016 at 4:27:34 PM UTC+2, rpaprocki wrote:
Hi,
On Mon, Aug 22, 2016 at 6:50 PM, Wiktor Sadowski
<dsgnv...@gmail.com> wrote:
Thanks a lot!
I knew about "fd:close()". But I'm not that happy that io.read returns a string rather than a pointer :-)
For a file over 200 MB, a pointer would be more sensible in my opinion.
How exactly would this work? Would you be reading from various portions of the file from disk for every access? That would absolutely destroy performance.
So maybe a memory-mapped file? Would that make sense? And how would I make sure it is released on nginx exit?
I don't think Lua provides an interface to mmap a file. You will probably need to write a native C module for something like that.
You also may want to examine your approach here. Something like redis might make a lot more sense than seeking all over a bare file, even one loaded entirely into memory. Maybe describe a bit more of what you're trying to accomplish so we can better understand the problem? In general, native Lua file IO is not performant in the resty ecosystem, and should be avoided.