lnobad / lidgren-network-gen3

Automatically exported from code.google.com/p/lidgren-network-gen3

add bulk reads to reduce thread synchronization cost and prevent infinite reads under high traffic #87

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Ran a test with a high message send rate (one send per tight-loop iteration) and multiple clients.

What is the expected output? What do you see instead?
I expected things to slow down as the CPU bogged down; instead I see livelocks 
and crippling performance. Another realization is that if my "read message" 
loop is sufficiently slow, it can become infinitely backlogged and thus never 
finish reading messages, meaning the rest of the application stalls.
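
For illustration, here is a minimal sketch of that failure mode (not code from the library; HandleMessage is a placeholder for the application's own handler):

    using Lidgren.Network;

    static void PollNaively(NetPeer peer)
    {
        // Drains the queue until it is empty. Under sustained high traffic the
        // queue may never become empty, so this loop can run forever and starve
        // the rest of the application; each ReadMessage() call also pays its own
        // lock acquisition on the internal queue.
        NetIncomingMessage msg;
        while ((msg = peer.ReadMessage()) != null)
        {
            HandleMessage(msg); // placeholder application handler
            peer.Recycle(msg);
        }
    }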

What version of the product are you using? On what operating system?
Latest SVN from Aug 2011.

Please provide any additional information below.

Here is a simple "bulk read/write" I added to my copy of the library. It lets 
me process up to a fixed number of messages (or everything currently enqueued) 
while paying the thread synchronization cost only once per batch.

===== ADD TO NETQUEUE.CS: LINE 163 =====
        /// <summary>
        /// lock the collection for bulk read/writes
        /// </summary>
        public void Lock()
        {
            Monitor.Enter(m_lock);
        }
        /// <summary>
        /// unlock the collection from bulk read/writes
        /// </summary>
        public void Unlock()
        {
            Monitor.Exit(m_lock);
        }

=====  ADD TO NETPEER.CS LINE 186 =====

        public void ReadMessage(ICollection<NetIncomingMessage> addTo)
        {
            ReadMessage(addTo, int.MaxValue);
        }
        public void ReadMessage(ICollection<NetIncomingMessage> addTo, int maxReads)
        {
            // take the queue lock once for the whole batch instead of once per message
            m_releasedIncomingMessages.Lock();
            try
            {
                for (int i = 0; i < maxReads; i++)
                {
                    NetIncomingMessage retval = ReadMessage();
                    if (retval == null)
                        break; // no more messages left
                    addTo.Add(retval);
                }
            }
            finally
            {
                // always release the lock, even if ReadMessage or Add throws
                m_releasedIncomingMessages.Unlock();
            }
        }
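
For context, here is how an application frame might use this bulk read (a usage sketch, not part of the patch; BulkReadExample, PollBounded and HandleMessage are placeholder names, and the cap of 256 is an arbitrary choice):

    using System.Collections.Generic;
    using Lidgren.Network;

    static class BulkReadExample
    {
        static readonly List<NetIncomingMessage> s_batch = new List<NetIncomingMessage>(256);

        // Called once per frame/tick. One Lock()/Unlock() pair covers the whole
        // batch, and anything beyond the cap stays queued for the next frame
        // instead of stalling this one.
        public static void PollBounded(NetPeer peer)
        {
            s_batch.Clear();
            peer.ReadMessage(s_batch, 256); // bulk overload added above
            foreach (var msg in s_batch)
                HandleMessage(msg);         // placeholder application handler
        }

        static void HandleMessage(NetIncomingMessage msg) { /* application logic */ }
    }
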

===========  ADD TO NETPEER.MESSAGEPOOLS.CS LINE 172 ===========

        public void Recycle(IEnumerable<NetIncomingMessage> toRecycle)
        {
            // take both locks once up front; the per-message Recycle calls then
            // re-enter locks this thread already holds, which is cheap
            lock (m_storagePool)
            {
                m_incomingMessagesPool.Lock();
                try
                {
                    foreach (var msg in toRecycle)
                        Recycle(msg);
                }
                finally
                {
                    m_incomingMessagesPool.Unlock();
                }
            }
        }
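
Extending the PollBounded sketch above, the processed batch can then be handed back to the pool in one call as well, instead of one Recycle per message:

            // after the foreach in PollBounded:
            peer.Recycle(s_batch); // single pair of locks for the whole batch
            s_batch.Clear();
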

Original issue reported on code.google.com by jas...@novaleaf.com on 7 Sep 2011 at 2:02

GoogleCodeExporter commented 9 years ago
Sounds like a good suggestion; I'll have a look and add something like this.
Note, however, that if your application cannot process messages faster than 
they arrive, you're in trouble regardless of this change.

Original comment by lidg...@gmail.com on 7 Sep 2011 at 6:44

GoogleCodeExporter commented 9 years ago
Added in revision 259; thanks for the suggestion!

Original comment by lidg...@gmail.com on 9 Sep 2011 at 8:13

GoogleCodeExporter commented 9 years ago
FYI, I only looked at the bulk recycle, but you are not taking the lock(s) for 
the whole group, so it is still doing the per-message synchronization my 
suggestion is trying to prevent.
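
In other words (an illustrative sketch, not the actual rev 259 code; the names are placeholders for whatever pool and lock the library uses):

    using System.Collections.Generic;
    using Lidgren.Network;

    static class LockGranularitySketch
    {
        static readonly object poolLock = new object(); // stand-in for the pool's lock

        // per-message synchronization: N messages cost N Enter/Exit pairs
        static void RecyclePerMessage(IEnumerable<NetIncomingMessage> batch)
        {
            foreach (var msg in batch)
                lock (poolLock)
                    ReturnToPool(msg);
        }

        // group synchronization: N messages cost a single Enter/Exit pair
        static void RecycleAsGroup(IEnumerable<NetIncomingMessage> batch)
        {
            lock (poolLock)
                foreach (var msg in batch)
                    ReturnToPool(msg);
        }

        static void ReturnToPool(NetIncomingMessage msg) { /* pool insert */ }
    }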

Original comment by jas...@novaleaf.com on 9 Sep 2011 at 8:51

GoogleCodeExporter commented 9 years ago
Yes, I don't like the Lock/Unlock mechanism, even though it would work OK if 
used within a try-finally. I'll do another pass on it, though; it's both the 
storage and the incoming NetQueue that are locking and unlocking unnecessarily.
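
For reference, used from the caller's side the Lock/Unlock pair from the NetQueue patch above would look like this (queue is an illustrative variable name):

    queue.Lock();      // Monitor.Enter on the queue's internal lock object
    try
    {
        // bulk dequeue/enqueue work here, under a single lock acquisition
    }
    finally
    {
        queue.Unlock(); // Monitor.Exit runs even if the body above throws
    }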

Original comment by lidg...@gmail.com on 9 Sep 2011 at 10:04

GoogleCodeExporter commented 9 years ago
Updated in rev 265 to use fewer locks; it might improve performance slightly in 
cases where a lot of messages are bulk recycled, but the issue of infinite 
backlogging is an application-level one.

Original comment by lidg...@gmail.com on 25 Sep 2011 at 10:42