oskarnie14 / lidgren-network-gen3

Automatically exported from code.google.com/p/lidgren-network-gen3

"Socket threw exception; would block - send buffer full? Increase in NetPeerConfiguration" in file stream sample #117

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
When streaming a file to another computer on my local wifi network using the
File Stream sample from the latest SVN (r289), I get the following log message
hundreds of times:

"Socket threw exception; would block - send buffer full? Increase in 
NetPeerConfiguration"

The stream doesn't progress at all, the connection times out, and then I get 
many "Received unhandled library message Acknowledge from ..." log messages.

I've been trying to debug this issue in my own app until I realized the same 
issue occurs with the sample.

A couple of things to note:
 * It fails no matter which of the two computers is the client or the server.
 * I tried with r275 (December 2011) and r256 (August 2011) too; it fails with both.
 * I tried with a wired connection and _it works just fine_

Original issue reported on code.google.com by elise...@gmail.com on 14 May 2012 at 8:51

GoogleCodeExporter commented 9 years ago
Have you tried increasing the send buffer size in NetPeerConfiguration?
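I.e. something like this, a minimal sketch; the exact property names can differ by revision, and the app identifier and sizes are placeholders:

using Lidgren.Network;

// Raise the socket buffers on the configuration before constructing the peer.
var config = new NetPeerConfiguration("filestream");   // app identifier is a placeholder
config.SendBufferSize = 1024 * 1024;                    // bytes; well above the default
config.ReceiveBufferSize = 1024 * 1024;                 // raising the receive side too can't hurt
var peer = new NetPeer(config);
peer.Start();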

Original comment by lidg...@gmail.com on 26 May 2012 at 8:26

GoogleCodeExporter commented 9 years ago
Yes, to very high values. Didn't fix the problem.

I've been digging through the library with the debugger, trying to find the
root cause too, but I haven't been able to pinpoint it.

Original comment by elise...@gmail.com on 28 May 2012 at 2:46

GoogleCodeExporter commented 9 years ago
Did anyone ever figure out what was going on here? I'm having a similar issue
on a virtual machine running in SoftLayer, but the problem never happens on my
MacBook Pro. I have a really large send buffer (~900k), so I don't think that
is the issue. I am using this in a server environment, but I've seen the issue
with a single user logged in.

Original comment by mikeco...@playfulcorp.com on 7 May 2014 at 3:15

GoogleCodeExporter commented 9 years ago
I worked around it by adding a count of messages awaiting acknowledgement to
the NetReliableSenderChannel (see
https://bitbucket.org/sparklinlabs/lidgren/commits/2879199834609a5db62ac3ad6bdee2c33e79a1f9)
and throttling based on that count:

int MaxUnackedMessages = ...; // I'm using a value of 4 by default IIRC

// ...

// Ask the connection how much room is left in the reliable-ordered send window
int windowSize, freeWindowSlots;
connection.GetSendQueueInfo(NetDeliveryMethod.ReliableOrdered, channelIndex,
    out windowSize, out freeWindowSlots);

// Only queue more data while the window has room and few messages are still awaiting acks
if (freeWindowSlots > 0 &&
    connection.GetReliableMessagesAwaitingAckCount(channelIndex) < MaxUnackedMessages)
{
    // Send more stuff
}

That seems to work pretty well.
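A sketch of how that check might be driven from a per-tick sending loop; PumpFileTransfer, SendNextChunk and MaxUnackedMessages are illustrative names rather than library API, and GetReliableMessagesAwaitingAckCount is the method added in the commit linked above:

// Called once per update tick on the sender; keeps queueing file chunks while
// the reliable channel has free window slots and few messages are awaiting acks.
void PumpFileTransfer(NetConnection connection, int channelIndex)
{
    while (true)
    {
        int windowSize, freeWindowSlots;
        connection.GetSendQueueInfo(NetDeliveryMethod.ReliableOrdered, channelIndex,
            out windowSize, out freeWindowSlots);

        if (freeWindowSlots <= 0 ||
            connection.GetReliableMessagesAwaitingAckCount(channelIndex) >= MaxUnackedMessages)
            return; // window full or too many unacked messages; try again next tick

        // Reads the next chunk from the file and calls connection.SendMessage(...);
        // end-of-file handling is omitted from this sketch.
        SendNextChunk(connection, channelIndex);
    }
}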

Original comment by elise...@gmail.com on 7 May 2014 at 7:15

GoogleCodeExporter commented 9 years ago
Whoops, there was an extra parenthesis in my if statement as originally posted;
it doesn't seem like I can edit it out.

Original comment by elise...@gmail.com on 7 May 2014 at 7:17

GoogleCodeExporter commented 9 years ago
Thanks for the workaround. I'll give it a shot.

Original comment by mikeco...@playfulcorp.com on 8 May 2014 at 6:03

GoogleCodeExporter commented 9 years ago
Ack, comments on issues don't generate notifications, so you don't see them
until you go through all the issues once in a while.
Anyway, checking freeWindowSlots should really be enough... does the library
speed sample also run into the same problem you describe?

Original comment by lidg...@gmail.com on 10 Oct 2014 at 7:59

GoogleCodeExporter commented 9 years ago
I reported this a long time ago, so the details are fuzzy in my head, but yes,
it was definitely happening both with my code and with the file (or image?)
transfer sample from the repository.

Original comment by elise...@gmail.com on 10 Oct 2014 at 8:49

GoogleCodeExporter commented 9 years ago
Rev 382 added NetConnection.CanSendImmediately() to avoid mucking about with
GetSendQueueInfo. I also tweaked the code to report Sent/Queued more correctly.
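The check can then look something like this (a sketch; channelIndex and nextChunk are placeholders for whatever your sending code uses):

// Rev 382+: ask the connection directly whether a reliable-ordered send on this
// sequence channel would go out immediately rather than sit in the queue.
if (connection.CanSendImmediately(NetDeliveryMethod.ReliableOrdered, channelIndex))
{
    connection.SendMessage(nextChunk, NetDeliveryMethod.ReliableOrdered, channelIndex);
}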

Original comment by lidg...@gmail.com on 10 Oct 2014 at 3:37

GoogleCodeExporter commented 9 years ago
I'm still having a lot of problems with this stuff. I did find that this error
message went away if I disabled the auto MTU size, but I'm still getting
timeouts when sending a lot of data.

Using the acknowledgement code above works, but then on fast connections the
transfer is very slow. Without it, using CanSendImmediately, the connection
times out after a few seconds of receiving data.
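Assuming the setting in question is the AutoExpandMTU flag on NetPeerConfiguration, disabling it is a one-line configuration change:

// Disable automatic MTU expansion (set on the configuration before the peer is started;
// AutoExpandMTU is assumed to be the flag referred to above, check your revision).
config.AutoExpandMTU = false;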

Original comment by garrynewman@gmail.com on 23 Jan 2015 at 10:31

GoogleCodeExporter commented 9 years ago
How often do you run the sending/throttling code? If you sync this with the
framerate and your framerate is low, you won't get a lot sent (basically
fps * windowSize * MTU bytes per second).
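For a rough sense of scale (the window size and MTU below are assumed values for illustration, not figures confirmed in this thread):

// Back-of-the-envelope bound when at most one full window is sent per frame:
//   bytes per second ≈ fps * windowSize * mtu
int windowSize = 64;                     // assumed reliable window size
int mtu = 1400;                          // assumed approximate bytes per message
long at60fps = 60L * windowSize * mtu;   // 5,376,000 B/s, roughly 5.4 MB/s
long at10fps = 10L * windowSize * mtu;   // 896,000 B/s, roughly 0.9 MB/s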

Original comment by lidg...@gmail.com on 23 Jan 2015 at 3:05

GoogleCodeExporter commented 9 years ago
I may have isolated this issue. I was facing exactly the same problem as
described here, also with the out-of-the-box file streaming example. However,
in my case at least, only the release build exhibited the issue. Using this as
a clue I modified the code as follows:

//#if DEBUG
                SendDelayedPackets();
//#endif
// (of course, also uncomment all the required code for this in NetPeer.LatencySimulation)

And the problem vanishes. I haven't examined this in detail, but perhaps this
helps Michael out...

Original comment by jmkin...@gmail.com on 8 May 2015 at 4:12

GoogleCodeExporter commented 9 years ago
Ah, and just to be clear, the problem is more as described in the later
comments, e.g. comment #10. I don't see exceptions (assuming, of course, that
the file transfer example logs them properly). The connection simply times out.
If I disable the timeout, it's clear the transfer rate has slowed to nearly
(but not quite) a standstill. Under a debugger this shows up as
m_connection.CanSendImmediately(NetDeliveryMethod.ReliableOrdered, 1) very
rarely returning true.

Original comment by jmkin...@gmail.com on 8 May 2015 at 4:17