Open eminden opened 8 years ago
No, not really. That's why it's still a TODO :)
At the moment the only way to share data across workers is to use the shared dict, which is relatively slow, so we want to avoid having to check it on the cache-lookup fast path.
There's not much point getting a hit from the lru-cache (fast) and then having to check the dict (slow) to see if it's been expired.
At the moment my best idea is to have some kind of background timer function in each worker that checks for deleted keys in the dict and clears them from the lru-cache. This would obviously be asynchronous, and there'd be a gap between deleting something and it being deleted from the lru-caches in all the workers.
I haven't put a huge amount of thought into it though, there might be a better way...
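The background-timer idea above could be sketched roughly like this. This is just a hypothetical illustration, not the library's code: the `deleted_keys` shared dict name, the timer interval, and the assumption that whoever deletes a key records a tombstone in that dict are all made up for the example.

```lua
-- Hypothetical sketch: a per-worker timer that evicts lru-cache entries
-- whose keys have been recorded as deleted in a shared dict.
-- Assumes delete() elsewhere writes a tombstone into ngx.shared.deleted_keys.
local lrucache = require "resty.lrucache"

local lru = lrucache.new(200)  -- this worker's local cache

local function check_deletions(premature)
    if premature then
        return  -- nginx is shutting down
    end

    local deleted = ngx.shared.deleted_keys
    local keys = deleted:get_keys(1024)  -- fetch up to 1024 tombstones

    for _, key in ipairs(keys) do
        lru:delete(key)
    end

    -- re-arm the timer; error handling omitted for brevity
    ngx.timer.at(1, check_deletions)
end

ngx.timer.at(1, check_deletions)
```

One open design problem with this sketch is when to remove the tombstones themselves: they can't be cleared from the shared dict until every worker has seen them, which is exactly the synchronisation gap described above.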
This is a fairly common problem. I currently have a similar implementation with the same inefficiency problems. In our case I'm not yet using this library, but I think I could be with this feature, especially if it were implemented efficiently.
Currently I'm using a background timer to go through the shared dictionary and move things into the worker. I don't have the reverse implemented yet (i.e. TTL extension), as the cost of doing a set with a large string is quite high (it needs a TTL-specific set method).
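For what it's worth, the promotion direction described above (shared dict into the worker) might look something like this. The `hot_keys` dict name, interval, and cache size are invented for the sketch:

```lua
-- Rough sketch: periodically copy entries from a shared dict into this
-- worker's lua-resty-lrucache instance, so reads hit the fast local cache.
local lrucache = require "resty.lrucache"

local local_cache = lrucache.new(500)

local function promote(premature)
    if premature then
        return
    end

    local shm = ngx.shared.hot_keys
    for _, key in ipairs(shm:get_keys(0)) do  -- 0 = fetch all keys
        local value = shm:get(key)
        if value ~= nil then
            local_cache:set(key, value)
        end
    end

    ngx.timer.at(5, promote)
end

ngx.timer.at(5, promote)
```

Note that `get_keys(0)` locks the shared dict while it scans, so on a large dict a bounded count would be kinder to other workers.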
I'd be happy to help out; I've done some modifications to ngx_lua before. I don't have a lot of free time at the moment, but I can advise, test or something.
Yeah, my current use case doesn't actually require delete/flush functionality at all; stuff in the cache is valid forever. So I haven't tried too hard to solve the synchronisation issue.
It seems like there are plans to extend the shared dict functionality in ngx_lua in the future to add more advanced features (e.g. lists and sets similar to Redis), so maybe at some point there'll be a pubsub mechanism or something akin to Redis' keyspace events feed that could solve this problem nicely. Implementing that myself is a bit beyond my C skills at the moment though :)
The other thing I'd like to see is a command to return the TTL of a key in the shared dict. This would remove the need to store the TTL separately in order to accurately set it in the lru-cache. I did write a proof of concept for ngx_lua that added such a command, but there was already a PR open on ngx_lua to add something similar, so I never submitted it to agentzh.
Thanks for getting back to me.
Nice, I just found the rbtree PR; that's neat. I might need to see if I can use that (currently I have lots of libs that JSON-encode tables for storage).
I may end up implementing ngx.shared.DICT:set_ttl(key, ttl). Could you link me your PR so I can look at it for consistency? I didn't see it, although it's easy to miss since @agentzh has PRs stacking up.
I never actually submitted it, as there was another PR from someone else with a similar change plus some other changes as well.
My change was to add a DICT:ttl(key) function that returns the TTL rather than setting it.
https://github.com/hamishforbes/lua-nginx-module/commit/7b76e6b2b2318a53af623e3c809b033ecf37312f
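To illustrate how such a TTL command removes the need to store the expiry separately, a usage sketch might look like the following. The `DICT:ttl(key)` method here is the one from the commit above, and `my_cache`/`lru` are example names (a `lua_shared_dict` and a lua-resty-lrucache instance, respectively):

```lua
-- Sketch: mirror a shared-dict entry into the worker-local lru-cache
-- with an accurate remaining TTL, rather than a separately stored one.
local value = ngx.shared.my_cache:get(key)
if value ~= nil then
    -- remaining lifetime in seconds; 0 means no expiry in this sketch
    local remaining = ngx.shared.my_cache:ttl(key)
    lru:set(key, value, remaining > 0 and remaining or nil)
end
```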
Just ran into this library while searching for multi-level caches. We have a slightly different use case (DB lookup), so even shm lookups can be considered fast.
Anyway, for synchronisation we use: https://github.com/Mashape/lua-resty-worker-events
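A minimal sketch of wiring that library up for cache invalidation, assuming a `lua_shared_dict worker_events` has been configured and `lru` is each worker's lua-resty-lrucache instance (the `"cache"`/`"invalidate"` source and event strings are just examples):

```lua
-- Broadcast cache invalidations to all workers via lua-resty-worker-events.
local ev = require "resty.worker.events"

-- run once in init_worker_by_lua*; the shm carries the event queue
local ok, err = ev.configure({
    shm = "worker_events",  -- lua_shared_dict reserved for the library
    interval = 1,           -- poll interval in seconds
})

-- every worker registers a handler that purges its local lru-cache
ev.register(function(data, event, source)
    if source == "cache" and event == "invalidate" then
        lru:delete(data)  -- data is the cache key being invalidated
    end
end, "cache", "invalidate")

-- whichever worker performs the delete posts the event for all workers
ev.post("cache", "invalidate", "some:key")
```

Since the library polls the shm on a timer, there is still a short window before every worker has processed the event, but it avoids hand-rolling the tombstone bookkeeping.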
I came across the same problem, which is not yet implemented in this library, as described in the following TODO in the readme:
Syncronise LRU cache delete / flush across workers
Could you help me understand the idea of how to manipulate the variables in all workers?