Closed — martincapello closed this 2 months ago
clang-tidy review says "All clean, LGTM! :+1:"
I didn't test or review this, but just at a glance: is a map really needed? Why not just a sequential vector of events? (Events must always be processed in order, but maybe I'm missing something.)
Anyway, seeing the original issue, I'll try to reproduce the error and see if there is another alternative (e.g. concurrent_queue could offer an alternative try_pop() function that returns the lock to its internal mutex, so we can unlock it from the client side when the if-statement ends).
> I'll try to reproduce the error and see if [...]
Just as a side note, this will be next week, but you could give a try to an alternative implementation of this "try_pop_with_lock" thing.
> I didn't test or review this, but just at a glance: is a map really needed? Why not just a sequential vector of events? (Events must always be processed in order, but maybe I'm missing something.)
No, it is not really needed, just something where we can store the events. I used a map because I thought that giving the events an id and being able to find them by key was a good idea. But now I think we could use a regular std::deque and avoid the ids altogether. I will try this and update the PR.
> Anyway, seeing the original issue, I'll try to reproduce the error and see if there is another alternative (e.g. concurrent_queue could offer an alternative try_pop() function that returns the lock to its internal mutex, so we can unlock it from the client side when the if-statement ends).
Okay. I just need to make that use case work. To reproduce the issue more easily you can add a sleep(2ms) before line 82.
> I'll try to reproduce the error and see if [...]
> Just as a side note, this will be next week, but you could give a try to an alternative implementation of this "try_pop_with_lock" thing.
Not sure why we should look for another alternative if we have one that already seems to work. Also, I actually like this implementation 👉 👈 ...so I hope you understand that it is difficult for me to replace it with anything else, because I put some thought, time, work, and love into it. Only if it's proven that this doesn't work would I look for a better alternative. If you come up with a different fix, that's fine by me, I won't go mad or anything.
No problem, I'll try to review this next week in detail and see what's wrong with the current impl and what this fix offers.
What doesn't sit right with me here is:
clang-tidy review says "All clean, LGTM! :+1:"
> No problem, I'll try to review this next week in detail and see what's wrong with the current impl and what this fix offers.
> What doesn't sit right with me here is:
> - handling two collections of events and moving events between them (concurrent_queue <-> map/deque), instead of using just the concurrent queue for a locking/threading issue (like on other platforms).
Oh there is no more concurrent queue. It was replaced, so only one collection.
> - handling events must be as performant as possible (as it's the first step in processing each mouse event)
I agree!
> - there is a removed comment about "changing the display color profile"; not sure if that was tested, probably that code is not required anymore
Will take a look at this.
> Oh there is no more concurrent queue. It was replaced, so only one collection.
Oh ok, note that base::concurrent_queue<> is just what you have now: a mutex + a deque.
> Oh ok, note that base::concurrent_queue<> is just what you have now: a mutex + a deque.
You are right. The original idea of having the mutex in the class was that I wanted to lock several steps, not only the queue, because I was trying to generate ids to keep track of the events. Since that is not actually useful, I think I can just go back to using the concurrent queue. Will try it. Thank you.
clang-tidy review says "All clean, LGTM! :+1:"
I've been reviewing this solution and one thing that annoys me is that it effectively duplicates the events in the OS queue: each OS event is pushed to the concurrent queue and then pushed again to the OS event queue as an application-defined event, just to signal that an event should be popped from the concurrent queue. So I went back to the original code and tried a different fix.
Fix #89