Open Lol3rrr opened 2 years ago
Merging #102 (ba70de8) into main (d6d1312) will increase coverage by 2.80%. Report is 74 commits behind head on main. The diff coverage is 85.96%.

:exclamation: Current head ba70de8 differs from pull request most recent head 490377a. Consider uploading reports for the commit 490377a to get more accurate results.
| Files Changed | Coverage |
|---|---|
| src/aliasing.rs | 0.00% |
| src/read/factory.rs | 0.00% |
| src/read/guard.rs | ø |
| src/utilities.rs | ø |
| src/read.rs | 66.66% |
| src/lib.rs | 90.90% |
| src/handle_list.rs | 92.30% |
| src/write.rs | 92.85% |
Fixed all the notes from the review and also added the lock-free list, called HandleList. I also updated most of the tests to support running under no_std where possible; this is not possible for the doc test in the main comment, so I'm unsure whether that one should be touched at all or just ignored, since all other tests work in no_std.
Otherwise it should now be fully no_std compliant as far as I can tell, and just needs some minor finishing touches in terms of internal comments and style in certain places.
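As a rough illustration of the kind of gating involved (not the PR's actual test setup, and assuming a `std` cargo feature exists), tests that need std can simply be compiled out for no_std runs:

```rust
// Illustrative sketch only: assumes the crate exposes a `std` cargo feature.
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(test)]
mod tests {
    // Tests that only need `core`/`alloc` run in both configurations.
    #[test]
    fn works_without_std() {
        assert_eq!(2 + 2, 4);
    }

    // Tests that rely on `std` (e.g. threads) are compiled out for no_std runs.
    #[cfg(feature = "std")]
    #[test]
    fn works_with_std_threads() {
        std::thread::spawn(|| {}).join().unwrap();
    }
}
```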
Hmm, I think this is going to end up leaking all the entries since there's no Drop implementation. I wonder if the trick here is going to be using some existing crate that provides a lock-free list — what do you think?
Using an existing crate for a lock-free list would certainly make things a bit easier, since we wouldn't need to maintain our own implementation. However, I don't know how we could then get these "snapshots" for the write handle, because the crate would need to support that or something similar.
I think for now I'll try to improve the list I wrote and see how far I can get, but if that doesn't work out we should consider using another crate.
I think basically any singly-linked atomic linked list gives you "snapshot" read iterators. They sort of have to by nature — if you only add to the head, then any read has to start at the head at the time of the first read, and just walk "backwards in time" from then.
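To make that concrete, here is a minimal sketch (in the spirit of, but not copied from, the PR's HandleList) of a prepend-only atomic list whose shared-reference iterator is just the head pointer observed at creation time. Node reclamation is deliberately left out, which is exactly the Drop concern raised above.

```rust
// Minimal sketch: a prepend-only atomic list. An iterator is a snapshot by
// construction, because it only ever walks nodes that existed when it was
// created; later pushes land in front of its starting point.
use std::marker::PhantomData;
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct Node<T> {
    value: T,
    next: *mut Node<T>, // set once before the node is published, never changed after
}

struct AtomicList<T> {
    head: AtomicPtr<Node<T>>,
}

impl<T> AtomicList<T> {
    fn new() -> Self {
        Self { head: AtomicPtr::new(ptr::null_mut()) }
    }

    fn push(&self, value: T) {
        let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
        let mut head = self.head.load(Ordering::Acquire);
        loop {
            // Link the new node in front of the current head...
            unsafe { (*node).next = head };
            // ...and try to publish it as the new head.
            match self.head.compare_exchange_weak(head, node, Ordering::AcqRel, Ordering::Acquire) {
                Ok(_) => break,
                Err(actual) => head = actual,
            }
        }
    }

    // The iterator captures the head pointer at this moment: that is the snapshot.
    fn iter(&self) -> Iter<'_, T> {
        Iter { cursor: self.head.load(Ordering::Acquire), _list: PhantomData }
    }
}

struct Iter<'a, T> {
    cursor: *mut Node<T>,
    _list: PhantomData<&'a AtomicList<T>>,
}

// Cloning the iterator re-walks the same snapshot from the same node.
impl<'a, T> Clone for Iter<'a, T> {
    fn clone(&self) -> Self {
        Iter { cursor: self.cursor, _list: PhantomData }
    }
}

impl<'a, T> Iterator for Iter<'a, T> {
    type Item = &'a T;
    fn next(&mut self) -> Option<&'a T> {
        if self.cursor.is_null() {
            return None;
        }
        // Safety: nodes are never freed in this sketch, so the pointer stays valid.
        let node = unsafe { &*self.cursor };
        self.cursor = node.next;
        Some(&node.value)
    }
}
```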
Yeah, right, I didn't really think about that. But I think there was a problem where we want to iterate over the same list of handles in the writer while we wait for them to move on (the new code creates a new iterator in a loop, vs. the old code, which also created a new iterator in the loop but based on the MutexGuard, so it was always the same). That's why I create an explicit snapshot and then build an iterator off that snapshot: we then always iterate over the same handles.
If we were to just create a new iterator every time we check them, a new handle might have been added in the meantime, and we would need to keep track of which handles we saw last time and where they are now (a mapping between the current list state and the last seen "sub-list").
That's a good point. However, I think you'll generally find the iterators to be Clone specifically for this reason. When you grab an iterator for a linked list, it really just holds a pointer to a head node. And for a shared-reference iterator, that iterator should be Clone so that you can iterate from that same node multiple times.
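Building on the sketch above, a short illustrative usage example of why a Clone snapshot iterator is enough for the writer's repeated passes:

```rust
fn main() {
    let list = AtomicList::new();
    list.push(1u32);
    list.push(2);

    // The snapshot starts at whatever the head was at this moment.
    let snapshot = list.iter();
    list.push(3); // pushed after the snapshot, so it is not visible to it

    // Each pass clones the same iterator, so every pass covers exactly the
    // same set of handles, even if readers register in between.
    let first_pass: Vec<&u32> = snapshot.clone().collect();
    let second_pass: Vec<&u32> = snapshot.clone().collect();
    assert_eq!(first_pass, second_pass);
    assert_eq!(first_pass, vec![&2, &1]); // newest-first, frozen at snapshot time
}
```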
Now fixed the previous Drop problem and also found and fixed another minor problem with the list snapshot.
I first fixed the Drop problem for the HandleList by moving the actual ownership of the list into a new InnerList, which the HandleList then wraps in an Arc. This ensures that once InnerList is dropped, no one can still hold a reference to any of its entries, so at that point we can safely free all entries of the list.
This also revealed that ListSnapshot was not tied to the actual list in any way, by a lifetime or anything like that, which could have resulted in use-after-frees with the new Drop behaviour. To address this, ListSnapshot now also stores a clone of the Arc wrapping the InnerList. That way, even if every other handle to the list is dropped, the snapshot itself keeps the list alive, so none of the entries are freed and we can still safely access them.
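For readers following along, the ownership story described here might look roughly like the following; everything beyond the names HandleList, InnerList, and ListSnapshot is an illustrative guess, not the PR's actual code.

```rust
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};
use std::sync::Arc;

// Hypothetical per-reader entry (the discussion mentions Arc<AtomicUsize> epochs).
struct Entry {
    epoch: Arc<AtomicUsize>,
    next: *mut Entry,
}

// Owns the nodes: when the last Arc<InnerList> is dropped, Drop can walk the
// chain and free every Entry, because no handle or snapshot can still reach them.
struct InnerList {
    head: AtomicPtr<Entry>,
}

impl Drop for InnerList {
    fn drop(&mut self) {
        let mut cur = *self.head.get_mut();
        while !cur.is_null() {
            // Safety: we have exclusive access here; nobody else holds the nodes.
            let boxed = unsafe { Box::from_raw(cur) };
            cur = boxed.next;
        }
    }
}

#[derive(Clone)]
struct HandleList {
    inner: Arc<InnerList>,
}

// Holds its own Arc clone, so the entries it points into cannot be freed while
// the snapshot is alive, even if every HandleList is dropped first.
struct ListSnapshot {
    head: *mut Entry,
    _keepalive: Arc<InnerList>,
}

impl HandleList {
    fn snapshot(&self) -> ListSnapshot {
        ListSnapshot {
            head: self.inner.head.load(Ordering::Acquire),
            _keepalive: Arc::clone(&self.inner),
        }
    }
}
```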
Fixed most of the comments you made on the last commit.
There are still some open questions for me, though, especially how we want to handle the removal of entries from the list, which I already touched on in one of my comments. My main concern is that we would either need nodes that can represent an "empty" state, which would certainly make things more complicated and would mean we can never actually reduce our memory usage, or, if we start removing entries, we would need to worry about memory reclamation, which I'm sure you know can be very tricky (especially when targeting no_std, I think).
Besides that, there is only the small question of how we could avoid traversing the list twice in publish, but that's not of the highest priority compared to the other problem we need to solve.
I think having a way to represent "unused" is the way to go here. We don't actually have to empty them, since the whole Arc<AtomicUsize> can be re-used by another reader that comes along later! The "used" bit can just be an AtomicBool that you compare_exchange from false to true, and keep searching (or eventually allocate) if nothing is found.
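A small sketch of that claim/release scheme (illustrative only, not the haphazard crate's or the PR's actual code), assuming an entry layout like the one sketched earlier:

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;

struct Entry {
    in_use: AtomicBool,
    epoch: Arc<AtomicUsize>,
    next: *mut Entry,
}

// Try to claim an existing, currently unused entry. The node itself is never
// freed, so there is no reclamation problem even in no_std.
fn try_claim(head: *mut Entry) -> Option<*mut Entry> {
    let mut cur = head;
    while !cur.is_null() {
        let entry = unsafe { &*cur };
        // Winning this CAS gives the new reader exclusive use of the entry.
        if entry
            .in_use
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Relaxed)
            .is_ok()
        {
            return Some(cur);
        }
        cur = entry.next;
    }
    None // every entry is taken; the caller allocates and pushes a fresh one
}

// Dropping a read handle just clears the bit instead of unlinking the node.
fn release(entry: &Entry) {
    entry.in_use.store(false, Ordering::Release);
}
```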
You may find this code from the haphazard crate, which more or less implements the exact same thing, helpful:
Maybe we could even take that implementation and extract it into its own crate so we can re-use it!
Hopefully I can find some time this week to properly look into that and integrate it. Will give an update then.
I would love to use left-right in a no_std project of mine, but development on no_std support seems to have stalled. What is the current state of development, and is there any way I could help out?
This is the corresponding PR for Issue #101.
Currently this only contains the changes for the yielding of the WriteHandle.
This change is