Hey guys
Thanks so much for all the responses
@agentzh - thanks so much for the testing recommendations, I'm going to check out the QAT framework this afternoon
@Peter
> Why do you need to use either to produce uids? Presumably you want this to be very fast? There are robust uuid libraries available that can produce unique uids without need for any interaction between instances or shared state.
That's a really great question. This work is currently a proof of concept which will hopefully develop, at first, into an intelligent upstream cache for the application. The application already exists (and is rather large) - the keys generated via Redis are widespread and used in various other areas, so for now I need to stick with this format of key generation. It is, however, an incredibly valid point that UUIDs generated from within the system itself would open up a raft of performance gains (we also have limitations on the length of the UIDs generated currently..)
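For concreteness, the general shape of the Redis-generated keys is something like this (a sketch only - the prefix, counter name, and address are illustrative rather than our real ones, and it assumes an OpenResty handler with lua-resty-redis available):

```lua
-- Sketch: Redis-generated keys as an atomic INCR plus a fixed prefix.
-- "uid:counter", "session:", and the address are illustrative placeholders.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)  -- 1s connect/read/write timeout
assert(red:connect("127.0.0.1", 6379))

-- INCR is atomic in Redis, so every worker on every instance gets a
-- unique, ordered number with no coordination in the app itself.
local n = assert(red:incr("uid:counter"))
local uid = "session:" .. n  -- short, ordered, and globally unique

-- Return the connection to the cosocket pool rather than closing it.
red:set_keepalive(10000, 100)
```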
@Brian
> I would just use redis, FWIW. In my experience, it winds up being less code and complexity. I also have instances I get better overall app performance by using redis than using shared dict, because the spin locks in the shared dict block the entire nginx process, whereas redis network calls do not. Also, redis has some nice features, so once you start using it for one thing, you start using it for many things
I'm really interested by this - do you know of anywhere that discusses the locks in detail so that I could understand their potential impact on performance? Do the locks only apply on writes?
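For contrast, here's roughly what the shared-dict flavour of a counter would look like (a sketch; it assumes a hypothetical `lua_shared_dict uids 10m;` declared in nginx.conf - the zone and key names are made up):

```lua
-- Sketch: an atomic counter in an nginx shared dict instead of Redis.
-- Assumes nginx.conf declares:  lua_shared_dict uids 10m;
local uids = ngx.shared.uids

local uid, err = uids:incr("uid:counter", 1)
if err == "not found" then
    -- First use: seed the counter. add() is atomic, so if two workers
    -- race here only one seeds it and the retried incr succeeds for both.
    uids:add("uid:counter", 0)
    uid, err = uids:incr("uid:counter", 1)
end
if not uid then
    ngx.log(ngx.ERR, "shared dict incr failed: ", err)
end
```

Each shared-dict operation takes the zone's lock while it runs, which is the blocking behaviour Brian describes above.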
Also +1 on the redis usefulness - we're big fans of it currently and make a lot of use of it - the embedded lua scripting has opened up a whole new raft of possibilities as well
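As a tiny example of the sort of thing the embedded scripting enables (a sketch only - the script body and the "greeting" key are illustrative, and it assumes an OpenResty handler with lua-resty-redis):

```lua
-- Sketch: atomic get-or-set via Redis' embedded Lua scripting (EVAL).
-- The script runs atomically inside Redis, so there is no race between
-- the GET and the SET, and no round trip between the two calls.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)
assert(red:connect("127.0.0.1", 6379))

local script = [[
    local v = redis.call("GET", KEYS[1])
    if not v then
        redis.call("SET", KEYS[1], ARGV[1])
        v = ARGV[1]
    end
    return v
]]
-- eval(script, numkeys, key..., arg...)
local res, err = red:eval(script, 1, "greeting", "hello")
if not res then
    ngx.log(ngx.ERR, "eval failed: ", err)
end

red:set_keepalive(10000, 100)
```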
One question I'd love to ask you is about performance. We have an existing application I developed which is again an upstream cache; it relies heavily on Redis, but in load testing it is the connection to Redis itself that seems to be our lowest-hanging fruit in terms of bottlenecks. We can get around 17.5K req/s into the nginx instance, above which we start seeing "resource unavailable" on the unix socket (we connect via UDS as opposed to TCP/IP). I'd be really keen to hear about your experiences with it and whether you've done any load testing yourself. I could of course look at distributing the load across a number of Redis instances, or perhaps a proxy such as twemproxy, but we're currently plenty happy with the 17.5K/sec :)
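In case it helps anyone comparing notes, this is the general shape of the per-request UDS access pattern under discussion (a sketch - the socket path, timeout, and pool sizes are illustrative). Whether or not connection pooling is in play makes a big difference to socket pressure, since a fresh connect per request burns through sockets far sooner:

```lua
-- Sketch: per-request Redis access over a unix domain socket with
-- cosocket pooling. Path and pool numbers are illustrative.
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(500)  -- ms

local ok, err = red:connect("unix:/var/run/redis/redis.sock")
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local val, err = red:get(ngx.var.uri)
if val == ngx.null then
    val = nil  -- cache miss
end

-- Keep up to 100 idle connections per worker for 10s instead of closing;
-- reused connections skip the connect() entirely, so far fewer sockets
-- are opened under load.
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "set_keepalive failed: ", err)
end
```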
Thanks so much for the help so far guys, really appreciated
/Matt
On Saturday, 8 June 2013 12:52:25 UTC+1, Brian Akins wrote:
On Jun 8, 2013, at 12:54 AM, Peter Booth <peter...@me.com> wrote:
> Why do you need to use either to produce uids? Presumably you want this to be very fast? There are robust uuid libraries available that can produce unique uids without need for any interaction between instances or shared state.
If on Linux, this works:
io.open("/proc/sys/kernel/random/uuid"):read()
http://lua-users.org/lists/lua-l/2004-06/msg00587.html
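For completeness, the one-liner never closes the file handle; a plain-Lua version with an explicit close and a sanity check (Linux only, since it relies on that procfs file) might look like:

```lua
-- Read a kernel-generated random UUID on Linux and close the handle.
local f = assert(io.open("/proc/sys/kernel/random/uuid", "r"))
local uuid = f:read("*l")  -- one line, newline stripped
f:close()

-- Kernel UUIDs are 36 chars in 8-4-4-4-12 hex groups.
assert(#uuid == 36)
assert(uuid:match("^%x+%-%x+%-%x+%-%x+%-%x+$"))
```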