Open vizakgh opened 1 week ago
Taking a look at this right now.
Yup 💯 Looking at a fix for this.
The problem is in the message enqueue path (channel write/read). I replaced EnqueueSendMessage and ProcessSendQueue with a direct _clientWebSocket.SendAsync call and everything works correctly.
Yup, found the same problem and working on the fix.
P.S. You can't do this:

    lock (_mutexSend)
    {
        Log.Verbose("SendBinaryImmediately", "Sending binary message immediately..");
        if (length == -1)
        {
            length = data.Length;
        }

        _clientWebSocket.SendAsync(new ArraySegment<byte>(data, 0, length), WebSocketMessageType.Binary, endOfMessage: true, _cancellationTokenSource.Token).ConfigureAwait(continueOnCapturedContext: false);
    }
We can't start another send on the same WebSocket while one is in progress, but this code doesn't await SendAsync: _mutexSend is released before SendAsync finishes, so we can re-enter this code and start a second send while the first is still in progress on the same socket.
I refactored this method in my project:

    public async Task SendBinaryImmediately(byte[] data, int length = Constants.UseArrayLengthForSend)
    {
        if (!await _mutexSend.WaitAsync(SEND_MUTEXT_TIMEOUT, _cancellationTokenSource.Token))
        {
            Log.Error("SendBinaryImmediately", "Mutex timeout");
            return;
        }

        try
        {
            Log.Verbose("SendBinaryImmediately", "Sending binary message immediately.."); // TODO: dump this message
            if (length == Constants.UseArrayLengthForSend)
            {
                length = data.Length;
            }

            await _clientWebSocket.SendAsync(new ArraySegment<byte>(data, 0, length), WebSocketMessageType.Binary, true, _cancellationTokenSource.Token);
        }
        finally
        {
            _mutexSend.Release();
        }
    }
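For what it's worth, here is a minimal, self-contained sketch (not SDK code; FakeSocket and SendGuarded are made up for illustration) of why the SemaphoreSlim pattern serializes sends: the fake socket throws if a second send starts while one is still in flight, and guarding each send the same way SendBinaryImmediately guards _clientWebSocket.SendAsync prevents that.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Throws if two sends overlap, like a real ClientWebSocket would.
class FakeSocket
{
    private int _inFlight;

    public async Task SendAsync(byte[] data)
    {
        if (Interlocked.Increment(ref _inFlight) > 1)
            throw new InvalidOperationException("Overlapping send on the same socket");
        await Task.Delay(20); // simulate a slow network write
        Interlocked.Decrement(ref _inFlight);
    }
}

class Program
{
    private static readonly SemaphoreSlim _mutexSend = new SemaphoreSlim(1, 1);
    private static readonly FakeSocket _socket = new FakeSocket();

    // Same shape as the refactored method: await the semaphore,
    // await the send, release in finally.
    static async Task SendGuarded(byte[] data)
    {
        await _mutexSend.WaitAsync();
        try
        {
            await _socket.SendAsync(data);
        }
        finally
        {
            _mutexSend.Release();
        }
    }

    static async Task Main()
    {
        // Ten concurrent callers; without the semaphore the fake socket throws.
        var tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = SendGuarded(new byte[16]);
        await Task.WhenAll(tasks);
        Console.WriteLine("all sends serialized");
    }
}
```

Note that a plain `lock` can't be used here at all, since you can't `await` inside a `lock` block; SemaphoreSlim.WaitAsync is the standard async replacement.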
Yep, that was my conclusion as well. There is a PR open, @vizakgh, if you want to take a look.
Hi, thanks. I can't find the fix for this in the PR: https://github.com/deepgram/deepgram-dotnet-sdk/issues/344#issuecomment-2436390253
You can reproduce this issue by using big data chunks in SendAsync on a slow network.
It should be in the PR linked in this issue: https://github.com/deepgram/deepgram-dotnet-sdk/pull/345
The change is significant because of the backward compatibility guarantees that we need to keep. You need to use v2 of the Listen WS Client (this also affects TTS WS as well):

- Deepgram.Clients.Listen.v2.WebSocket: https://github.com/deepgram/deepgram-dotnet-sdk/pull/345/files#diff-c270781c56253048bf4dc89b737a52466dd35f015eda401aa8b541bfdb67bc48R5
- Deepgram.Models.Listen.v2.WebSocket
Hi @jcdyer Have you been able to verify this PR addresses your issue?
What is the current behavior?
Deadlocks on machines with fewer than 8 CPUs.
Steps to reproduce
Run the client on a 2-CPU machine.
Expected behavior
no deadlocks
Please tell us about your environment
Azure Web App with 2 vCPUs: deadlocks. The same app with 8 CPUs: no deadlocks.
Other information
Investigation in progress. The issue is probably in these background thread starters:

    void StartSenderBackgroundThread() => Task.Factory.StartNew(() => ProcessSendQueue(), TaskCreationOptions.LongRunning);
    void StartReceiverBackgroundThread() => Task.Factory.StartNew(() => ProcessReceiveQueue(), TaskCreationOptions.LongRunning);
    void StartKeepAliveBackgroundThread() => Task.Factory.StartNew(() => ProcessKeepAlive(), TaskCreationOptions.LongRunning);
    void StartAutoFlushBackgroundThread() => Task.Factory.StartNew(() => ProcessAutoFlush(), TaskCreationOptions.LongRunning);
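If it helps the investigation: one known pitfall with this pattern (a minimal, self-contained sketch, not SDK code; assuming the ProcessXxx methods are async) is that TaskCreationOptions.LongRunning only gives the delegate a dedicated thread until its first await. Every continuation after that runs on the thread pool, which is small on a 2-CPU machine, so the "long running" loops can still compete for pool threads there.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // Stand-in for an async ProcessSendQueue-style loop body.
    static async Task Work()
    {
        // LongRunning gave us a dedicated (non-pool) thread here.
        Console.WriteLine($"before await, pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
        await Task.Delay(10);
        // After the first await, the continuation runs on a thread-pool thread.
        Console.WriteLine($"after await, pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
    }

    static void Main()
    {
        // StartNew with an async delegate returns Task<Task>;
        // Unwrap to wait for the inner work, not just its startup.
        Task.Factory.StartNew(() => Work(), TaskCreationOptions.LongRunning)
            .Unwrap()
            .GetAwaiter()
            .GetResult();
    }
}
```

So on a small machine, whether these loops actually stay off the thread pool depends entirely on what they do before their first await.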