Open · beasteers opened this issue 2 years ago
Wow, that's pretty cool! :D Awesome idea, @beasteers! I didn't expect that a simple overloading of loads would enable functionality like that!
I'm pretty sure you can save some more performance by adding this to the library core, although probably not much. And to implement it elegantly we'd need some amount of refactoring.
How about a compromise for now: would you like to write it up as a README section in a PR? If more people find it useful, we can work on adding it as a core feature.
BTW, if I had a use case like this, I would consider using a single shared object protected with multiprocessing.Lock(). Then you don't need a queue at all, and you'd never run into buffer overflow (Queue.Full) issues if you're lagging too far behind. If you use shared memory of some sort you can even save more time on pickling this way (because you need neither pickle nor unpickle). This would not work for last_N, of course.
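For what it's worth, here's a minimal sketch of that alternative, assuming a fixed-size shared buffer is acceptable. The class name SharedLatest and the capacity parameter are made up for illustration; this version still pickles, but with a fixed-layout payload (e.g. a flat numeric array) you could write the fields into the shared buffer directly and skip pickle altogether:

```python
import multiprocessing as mp
import pickle


class SharedLatest:
    """Single shared slot holding only the newest message, guarded by a Lock.

    Sketch only: the buffer size is fixed, so messages larger than
    `capacity` are rejected rather than handled gracefully.
    """

    def __init__(self, capacity=1 << 16):
        self._lock = mp.Lock()
        self._buf = mp.Array('B', capacity, lock=False)  # raw shared byte buffer
        self._size = mp.Value('L', 0, lock=False)        # number of valid bytes

    def put(self, obj):
        data = pickle.dumps(obj)
        if len(data) > len(self._buf):
            raise ValueError('message larger than shared buffer')
        with self._lock:
            self._buf[:len(data)] = data   # overwrite: only the latest survives
            self._size.value = len(data)

    def get(self):
        with self._lock:
            n = self._size.value
            data = bytes(self._buf[:n])    # copy out while holding the lock
        return pickle.loads(data) if n else None
```

Since the writer always overwrites the same slot, a slow reader never causes a backlog; it simply sees whatever was written most recently.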
Also, an obvious thought: if you need only the latest object but want to keep previous objects, it sounds like you need a faster-lifo, not faster-fifo :D That is, a multiprocessing Stack.
I just wanted to share this recipe for how you can set up the queue to unpickle only the latest item and skip all the others. This is useful when processes run at different rates and one process needs to efficiently drop every message that isn't the latest, which is especially applicable to real-time applications.
This can be done quite simply with a wrapper class (see the sketch below), but maybe people might find it useful as a general feature? Wrapping it into the core would get around having to do the slightly hacky thing with self.actual_loads, but it's no big deal either way. An addition that could also maybe(?) be useful would be getting the N latest values.
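Here's a minimal sketch of the recipe. It assumes faster_fifo.Queue invokes its self.loads hook to deserialize each message it returns (which is what the overloading trick above relies on); the class name LatestQueue and the get_latest method are made up for illustration:

```python
import faster_fifo


def _identity(buf):
    # Stand-in for loads: hand back the raw pickled bytes untouched.
    return buf


class LatestQueue(faster_fifo.Queue):
    """Queue wrapper that unpickles only the newest message(s).

    Sketch only: assumes the queue calls ``self.loads`` on every message
    it returns, which is what lets us skip deserialization of stale items.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.actual_loads = self.loads  # keep the real deserializer around
        self.loads = _identity          # get_many() now yields raw bytes

    def get_latest(self, n=1, **kwargs):
        # Drain everything currently queued; only raw payloads come back.
        raw = self.get_many(**kwargs)
        # Unpickle just the last n messages; stale ones are dropped for free.
        latest = [self.actual_loads(m) for m in raw[-n:]]
        return latest[0] if n == 1 else latest
```

Because get_many drains the backlog in one call, the consumer pays the unpickling cost only for the message(s) it actually keeps. The same trick answers the last-N question: unpickle raw[-n:] instead of just the final element.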