Automatically exported from code.google.com/p/lidgren-network-gen3

Lock heavy #69

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
The code seems rather lock-heavy. Have you considered making it less so?

If you targeted 4.0 specifically (maybe a lidgren-network-gen3.4) you could 
use the new System.Collections.Concurrent types for most things - at least the 
most probable high-contention points (m_receivedFragmentGroups comes to mind 
immediately) - and the NetQueue as well (there are a bunch of lock-free queue 
implementations on the net, so you could even do this for 3.5: 
http://www.boyet.com/Code/LockFreeCode.zip).
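
For illustration, a minimal sketch of what a ConcurrentQueue<T>-backed message queue could look like under .NET 4.0; the MessageQueue name is hypothetical, not lidgren's API:

```csharp
// Hypothetical sketch (not lidgren's API): a message queue backed by
// .NET 4.0's ConcurrentQueue<T> instead of a lock-guarded Queue<T>.
using System.Collections.Concurrent;

public class MessageQueue<T>
{
    private readonly ConcurrentQueue<T> m_items = new ConcurrentQueue<T>();

    // Enqueue never blocks; contention is handled inside the collection.
    public void Enqueue(T item)
    {
        m_items.Enqueue(item);
    }

    // Returns false instead of blocking when the queue is empty.
    public bool TryDequeue(out T item)
    {
        return m_items.TryDequeue(out item);
    }
}
```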

Just some thoughts. I am busy updating the code with 4.0 types, but I can't 
make a patch (i.e. won't waste my time with SVN :)) - so if you want I will 
email the code as a zip once I am done.

Original issue reported on code.google.com by jonathan...@gmail.com on 6 Jun 2011 at 11:17

GoogleCodeExporter commented 9 years ago
Currently the library is targeting 3.5 but yes, some collections will probably 
switch over to the concurrent versions when the target switches to 4.0. 
However, most collections will still use locks - they're far cheaper when 
there's no actual contention, which is the most common case for the majority of 
the library's collections in normal usage. Also, I'm guessing some lock-free 
collections will generate garbage where locks will not.
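
For anyone who wants to check the "cheap when uncontended" claim on their own machine, here is a rough single-threaded micro-benchmark sketch comparing a lock-guarded Queue<T> with .NET 4.0's ConcurrentQueue<T>; it is illustrative only, and results vary by runtime and CPU:

```csharp
// Rough micro-benchmark sketch: single-threaded (i.e. uncontended)
// enqueue/dequeue through a lock-guarded Queue<T> vs. ConcurrentQueue<T>.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;

public static class UncontendedBench
{
    public static void Main()
    {
        const int N = 10000000;
        long sum = 0;

        var plain = new Queue<int>();
        var gate = new object();
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
        {
            lock (gate) plain.Enqueue(i);       // uncontended Monitor: fast path, no allocation
            lock (gate) sum += plain.Dequeue();
        }
        Console.WriteLine("lock + Queue<T>:    " + sw.ElapsedMilliseconds + " ms");

        var conc = new ConcurrentQueue<int>();
        sw.Restart();
        for (int i = 0; i < N; i++)
        {
            conc.Enqueue(i);                    // allocates internal segments as it grows
            int v;
            if (conc.TryDequeue(out v))
                sum += v;
        }
        Console.WriteLine("ConcurrentQueue<T>: " + sw.ElapsedMilliseconds + " ms");
        Console.WriteLine(sum);                 // keep the work observable
    }
}
```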

Original comment by lidg...@gmail.com on 6 Jun 2011 at 12:10

GoogleCodeExporter commented 9 years ago
Another alternative for concurrent collections in 3.5 is the reactive 
extensions library: http://msdn.microsoft.com/en-us/data/gg577610

Original comment by capre...@gmail.com on 21 Jun 2011 at 10:08

GoogleCodeExporter commented 9 years ago
Be careful with "exotic" synchronization constructs. I had the rude experience 
of a lock-free queue being about 2x slower than a simple lock wrapper over 
Queue<T> for most real use scenarios.

The best way to improve synchronization speed is to simply NOT synchronize so 
often. Example: allow reading/writing messages in bulk from the various 
threads. It would help a lot if you added an overload like 
peer.ReadMessages(ICollection<NetIncomingMessage> addIncomingMessagesTo) that 
can clear the queue with one single lock (sketched below).
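
A sketch of that bulk-read idea, assuming a plain Queue<T> guarded by a Monitor lock internally; the class, method, and field names here are hypothetical, not lidgren's actual internals:

```csharp
// Hypothetical sketch of a bulk-read API: drain every pending message
// under a single lock acquisition instead of locking once per message.
using System.Collections.Generic;

public class IncomingMessageQueue<T>
{
    private readonly Queue<T> m_messages = new Queue<T>();
    private readonly object m_lock = new object();

    public void Enqueue(T message)
    {
        lock (m_lock)
            m_messages.Enqueue(message);
    }

    // One lock acquisition empties the whole queue into the caller's collection.
    public int ReadMessages(ICollection<T> addTo)
    {
        lock (m_lock)
        {
            int count = m_messages.Count;
            while (m_messages.Count > 0)
                addTo.Add(m_messages.Dequeue());
            return count;
        }
    }
}
```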

Original comment by jas...@novaleaf.com on 4 Sep 2011 at 10:29

GoogleCodeExporter commented 9 years ago
It depends on your CPU architecture. If you have more than one real core, a 
lock-free structure should be faster, whereas on a single core locks are faster.

Original comment by jonathan...@gmail.com on 4 Sep 2011 at 3:29

GoogleCodeExporter commented 9 years ago
@Jonathan, that's my point. I used a 3-core AMD and a 4-core Intel in my 
tests. Both showed significant performance improvements using the "simple lock" 
method.

I had to use very specific tests for the lock-free version to come out ahead 
(enqueue/dequeue being performed on separate threads).

Again, my point is not that you can never use lock-free structures for 
performance improvements, just that you need to be careful, and that it's 
generally better to reduce synchronization than to try to use some "magically 
cheap" synchronization all the time - because, as my testing has shown, magical 
may not be as magical as you wish.

Original comment by jas...@novaleaf.com on 5 Sep 2011 at 5:23

GoogleCodeExporter commented 9 years ago
@Jason - indeed. Anything to do with threading is tricky.

If you are not enqueuing and dequeuing on separate threads there is no reason 
to use a lock-free structure (or even a lock, for that matter) in the first 
place; but from a quick glance over his code it seemed this was happening. In 
general, the only place I have found ConcurrentQueue to be useful is for 
work/message backlogs (where a fixed set of 'n' workers service requests), as 
sketched below.
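
A minimal sketch of that backlog pattern, with a fixed pool of workers pulling from a shared ConcurrentQueue<T>; all names here are illustrative, not part of lidgren:

```csharp
// Illustrative sketch: a fixed set of 'n' workers servicing a shared backlog.
using System;
using System.Collections.Concurrent;
using System.Threading;

public static class BacklogExample
{
    private static readonly ConcurrentQueue<string> s_requests =
        new ConcurrentQueue<string>();

    // Producers (e.g. the network thread) hand off requests without blocking.
    public static void Submit(string request)
    {
        s_requests.Enqueue(request);
    }

    // Start a fixed pool of 'n' workers that service the backlog.
    public static void StartWorkers(int n)
    {
        for (int i = 0; i < n; i++)
        {
            var worker = new Thread(() =>
            {
                string request;
                while (true)
                {
                    if (s_requests.TryDequeue(out request))
                        Console.WriteLine("handled " + request);   // do real work here
                    else
                        Thread.Sleep(1);   // back off while the backlog is empty
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }
    }
}
```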

I am not sure about the performance characteristics of ConcurrentDictionary (in 
terms of CPU count, load type, add/remove type), but I did get *marginally* 
better results with 50 load generators connecting/disconnecting continuously.

OT: I really like your Comment 3 as a general tip (never mind this scenario). 
You can do all forms of goodness with that (message backlog throttling etc.) 
and it probably really is the best way to approach this - I am definitely going 
to keep this in mind.

Original comment by jonathan...@gmail.com on 5 Sep 2011 at 1:38

GoogleCodeExporter commented 9 years ago
Replaced the regular critical-section lock with a ReaderWriterLockSlim in 
NetQueue (rev 338) - it's the only remaining lock in the library. Locks are not 
a bottleneck in the library; closing the issue.
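
A simplified sketch of the ReaderWriterLockSlim pattern described here (not the actual NetQueue source): readers share the read lock while mutations take the exclusive write lock:

```csharp
// Simplified illustration of the ReaderWriterLockSlim pattern: many threads
// may hold the read lock at once; the write lock is exclusive.
using System.Collections.Generic;
using System.Threading;

public class RwQueue<T>
{
    private readonly Queue<T> m_items = new Queue<T>();
    private readonly ReaderWriterLockSlim m_lock = new ReaderWriterLockSlim();

    public int Count
    {
        get
        {
            m_lock.EnterReadLock();      // shared: concurrent readers allowed
            try { return m_items.Count; }
            finally { m_lock.ExitReadLock(); }
        }
    }

    public void Enqueue(T item)
    {
        m_lock.EnterWriteLock();         // exclusive: blocks readers and writers
        try { m_items.Enqueue(item); }
        finally { m_lock.ExitWriteLock(); }
    }

    public bool TryDequeue(out T item)
    {
        m_lock.EnterWriteLock();
        try
        {
            if (m_items.Count > 0)
            {
                item = m_items.Dequeue();
                return true;
            }
            item = default(T);
            return false;
        }
        finally { m_lock.ExitWriteLock(); }
    }
}
```

As a general rule, ReaderWriterLockSlim pays off when reads (Count checks, peeks) greatly outnumber writes; for write-heavy traffic a plain lock performs comparably.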

Original comment by lidg...@gmail.com on 1 May 2013 at 1:32