@wilstoff does the error in the args to DisconnectedEventHandler contain what you are looking for?
```csharp
options.DisconnectedEventHandler += (s, args) => Console.WriteLine("Connection Disconnected ");
```
The event handler argument args can contain an Exception that is accessible via args.Error.
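For example, something along these lines (a minimal sketch, assuming opts is the Options instance used to create the connection) would surface the exception when one is attached:

```csharp
// Log the exception, if any, that accompanied the disconnect.
opts.DisconnectedEventHandler += (sender, args) =>
{
    if (args.Error != null)
        Console.WriteLine($"Connection disconnected: {args.Error}");
    else
        Console.WriteLine("Connection disconnected (no error reported).");
};
```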
If you check args.Error or args.Conn.LastEx, both are null. I believe this is because the processOpError function in Conn.cs below throws away any exception tracking that could be done whenever a reconnect is possible via the processReconnect call:
```csharp
private void processOpError(Exception e)
{
    bool disconnected = false;
    lock (mu)
    {
        if (isConnecting() || isClosed() || isReconnecting())
        {
            return;
        }

        if (Opts.AllowReconnect && status == ConnState.CONNECTED)
        {
            // On this path the exception e is never stored, so lastEx (and the
            // Error handed to DisconnectedEventHandler) stays null.
            processReconnect();
        }
        else
        {
            processDisconnect();
            disconnected = true;
            lastEx = e;
        }
    }

    if (disconnected)
    {
        Close();
    }
}
```
If you instead turn off reconnection, the last error stored comes from the readLoop function below and is the "Server closed the connection." exception, which also carries no information about why. I believe this is because, if the client itself requested the disconnection while we are mid-read, the connection is being brought down anyway and any partially processed messages can simply be thrown away. In the case of a forceful server disconnection, however, we may need a way to detect it as a slow consumer (or potential slow consumer) error while we are processing a large message, as below.
```csharp
private void readLoop()
{
    // Stack based buffer.
    byte[] buffer = new byte[Defaults.defaultReadLength];
    var parser = new Parser(this);
    int len;

    while (true)
    {
        try
        {
            len = br.Read(buffer, 0, Defaults.defaultReadLength);

            // A length of zero can mean that the socket was closed
            // locally by the application (Close) or the server
            // gracefully closed the socket. There are some cases
            // on windows where a server could take an exit path that
            // gracefully closes sockets. Throw an exception so we
            // can reconnect. If the network stream has been closed
            // by the client, processOpError will do the right thing
            // (nothing).
            if (len == 0)
            {
                if (disposedValue || State == ConnState.CLOSED)
                    break;

                throw new NATSConnectionException("Server closed the connection.");
            }

            parser.parse(buffer, len);
        }
        catch (Exception e)
        {
            if (State != ConnState.CLOSED)
            {
                processOpError(e);
            }

            break;
        }
    }
}
```
While an exception within the DisconnectedEventHandler would be better than nothing, my real point is that I believe this is an error and should raise the AsyncErrorEventHandler. I understand that may be difficult, because the server is kicking off a client that couldn't drain its buffer in a reasonable amount of time, so any message from the server along that same channel will also be dropped. I don't know if this can be solved without splitting into an admin message socket versus a normal message socket, and even then it isn't fully solvable. A separate admin message socket would at least lessen the chance of a bad subscription slowing down the processing of admin messages.
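For context, this is roughly where I would expect the error to surface (a sketch; as far as I can tell this handler currently fires for client-detected conditions such as a subscription exceeding its pending limits, not for the server-side disconnect described above):

```csharp
// Where I'd expect a slow consumer error to be reported. Today this appears to
// fire only for client-side detected conditions, not when the server forcibly
// closes the connection.
opts.AsyncErrorEventHandler += (sender, args) =>
{
    Console.WriteLine($"Async error on {args.Subscription?.Subject}: {args.Error}");
};
```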
@wilstoff, when the server closes the client as a slow consumer, the client may not be able to detect that before it processes the socket close. Because NATS does not have a close protocol message, this error may be missed by the client. This would apply to all NATS clients.
You can detect this situation earlier by tuning the pending buffer of a subscriber (see NATS.Client.Options.SubChannelLength) to alert the application before the server would disconnect the client.
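For example (a sketch; the value is illustrative and should be tuned to your message sizes and rates):

```csharp
// Sketch: a smaller pending buffer makes the client flag a slow consumer
// earlier, before the server's WriteDeadline is exceeded.
Options opts = ConnectionFactory.GetDefaultOptions();
opts.SubChannelLength = 1024; // illustrative value; the default is much larger
opts.AsyncErrorEventHandler += (sender, args) =>
    Console.WriteLine($"Possible slow consumer: {args.Error}");

using (IConnection conn = new ConnectionFactory().CreateConnection(opts))
{
    // subscribe and process messages as usual
}
```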
Stepping back, I'd suggest chunking the data and using a request/reply pattern to send the file over piece by piece. This will inherently rate limit your application, providing optimal throughput while mitigating the risk of overrunning the subscriber. Smaller chunk sizes will also allow you to transfer very large blocks of data across low-bandwidth network links.
I'd suggest determining a chunk count before sending, and sending a message to prepare the subscribing app with the number of chunks to expect. The subscriber then receives the chunks and pieces them back together.
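Roughly like this on the sending side (a sketch; the subject names, chunk size, and timeouts are made up for illustration, and conn is an existing IConnection):

```csharp
// Send a large payload as fixed-size chunks over request/reply so each chunk is
// acknowledged by the receiver before the next one is sent.
const int ChunkSize = 1024 * 1024; // 1 MB; tune to your network
byte[] payload = File.ReadAllBytes("large-file.bin");
int chunkCount = (payload.Length + ChunkSize - 1) / ChunkSize;

// Tell the receiver how many chunks to expect; it replies when it is ready.
conn.Request("file.transfer.start", BitConverter.GetBytes(chunkCount), 5000);

for (int i = 0; i < chunkCount; i++)
{
    int offset = i * ChunkSize;
    int length = Math.Min(ChunkSize, payload.Length - offset);
    byte[] chunk = new byte[length];
    Buffer.BlockCopy(payload, offset, chunk, 0, length);

    // The blocking Request call acts as flow control: we wait for the
    // receiver's ack before sending the next chunk.
    conn.Request("file.transfer.chunk." + i, chunk, 10000);
}
```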
That's a good suggestion; I knew there were some settings that would help detect this. I think the only thing that is actually wrong is that the client doesn't present an error to the user when auto-reconnect is on. It clearly has an error; it just may not understand why, beyond the fact that the server disconnected it. We have since reduced our max payload to 10MB to avoid this issue for now, since we don't think it is good design to naively send a large message like this over pub/sub. Maybe with req/reply we will revisit, but we would definitely not want to have to build a chunking solution ourselves. We currently send 3-4MB protobuf messages through another messaging system, and hopefully future scaling doesn't bloat that message.
Summary: Smaller messages will be more robust. Consider using ObjectStore to put large amounts of data in a stream, in combination with a signal message sent after the object has been stored to let the receiver know the large data is available and what bucket/name it is stored under.
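A hedged sketch of the signal half of that pattern (the ObjectStore put itself is elided; the subject and payload layout are illustrative only):

```csharp
// After the large object has been stored (for example via ObjectStore), publish
// a small signal message telling the receiver which bucket/name to fetch.
string bucket = "transfers";
string objectName = "load-test-payload.bin";

// ... put the large data into the object store under bucket/objectName first ...

conn.Publish("file.available", Encoding.UTF8.GetBytes(bucket + "/" + objectName));
```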
When load testing C# clients with one large message (~50MB), the client has the potential to trigger a slow consumer warning on the server; the client auto-reconnects, but the message is dropped and no error is presented to the user. If you turn off auto-reconnect, you see the error comes from the Conn.cs readLoop function as "Server closed the connection.", which correlates with this log on the server side:
Slow Consumer Detected: WriteDeadline of 5s exceeded with 2 chunks of 50000025 total bytes
While I understand this is hitting up against our WriteDeadline setting on the server (I'm running over VPN and WiFi, so I understand the slowness inherent in my setup), and I understand that NATS only guarantees at-most-once delivery, the fact that no error is surfaced to the consumer telling it that it is a slow consumer is concerning. I can understand this race condition can be difficult; I've tried on different machines and I sometimes do get slow consumer errors from the client, but it seems that whenever we hit the server's WriteDeadline limit, a message is dropped with no error. I believe that if the server forcibly closed the connection while writing, and it wasn't at the client's request, then it should be bubbled up to the client as an error even if the reason cannot be determined precisely.
Below are the testing code and server-side configurations:
OUTPUT
1 Server in cluster of 3