notsure2 closed this 3 years ago
I don't think sync.Map is necessary here. It's a simple case and contention should be rare, so a plain sync.Mutex should do. This doesn't really fall under either of the two cases where the doc recommends using sync.Map either.
The deadline logic appears fine to me.
@cbeuw It's necessary because multiple goroutines waiting on multiple streams from different sessions can time out at the same time and attempt to modify the map at the same time. In addition the map can be modified by an incoming connection if it tries to write to a session that was broken for some other reason.
The overhead is not that big, and I'd rather be 100% safe and have it reliable than leaving things to chance.
sync.Mutex might be fine, but it's more code to write, there will be more repetition, and it starts to become more error-prone. What's wrong with using a collection that handles synchronization internally?
We don't know how Cloak is going to be used; it can be put in situations where there are thousands of open UDP connections, sessions and streams. sync.Map is robust for this purpose.
Upon closer look, I think this code has fallen into a common race condition pattern called time-of-check to time-of-use. Because we are only ever concurrently deleting streams from the map, not adding them, the code shouldn't cause big problems. But it may attempt to write to a stream that has been deleted from the map, then delete the same key from the map twice (which is thankfully allowed by sync.Map). There will be bigger problems if someone comes along and adds streams concurrently.
To be strictly correct, the entire block from loading and checking whether the value is present, up until storing the newly created stream to the map, needs to be protected by a lock: https://github.com/cbeuw/Cloak/blob/874d30c5124e69312d09ca144bae699ef0cc71f7/internal/client/piper.go#L36-L53. There probably needs to be an else branch to handle the locking logic when the stream already exists, too.
This is because the values in the map may change after we fetched ok from the load operation. If the key was present, it could be deleted after the check (which is what's happening now); if it was absent, it could be added into the map by another goroutine (which isn't happening yet, but the pattern is there).
On the topic of mutexes: a lock is an extremely common synchronisation primitive. To protect access to memory regions shared by concurrent threads, a mutex lock is the go-to pattern. It is in fact rarer to use high-level thread-safe data structures, because they generally have a lot of subtleties which make them more appropriate in niche scenarios. There are some other reasons why I try to avoid using sync.Map in Cloak's codebase:

- sync.Map isn't type-safe - at least not until Go 1.17, where they potentially add generics
- When sync.Map was first added to the standard lib, the doc was written pretty unambiguously in telling people to avoid it in almost all scenarios. They have toned that down recently (last year maybe?) but the recommended use cases are still the same
- With sync.Map you have to juggle an interface{} around and that's really ugly - entirely due to the lack of generics in Go

And there are some stronger arguments presented here: https://medium.com/@deckarep/the-new-kid-in-town-gos-sync-map-de24a6bf7c2c
OK, I think we'll never run into the situation in which we may be concurrently adding new streams to the map, as that would require us to copy everything we just received from localConn into a goroutine-local buffer, which is almost certainly not worth it. I do still think using a mutex is better than sync.Map though.
OK, I'll change it to use a mutex and a regular map.
@cbeuw can you please check now?
Fixes #147, covers the same case for multiplex, and also fixes a memory leak where a session that sends a packet once and then times out would leave the old stream stuck in the map forever.
@cbeuw Please review and merge thanks 👍