cconstab closed this issue 2 years ago
To create a bad network locally, in the past I have used https://m0n0.ch/wall/index.php

I will see if I can provide a cheatsheet for setting that up as a local VM, for whoever wants to look at this tricky issue.
Made progress on this issue: `_syncInProgress` remains true even after the `AtTimeoutException` is thrown by the server. This prevents a new sync process from syncing keys (since `_syncInProgress` remained true, it assumes that the previous sync is still running). Attaching the error snippet:
```
OutboundMessageListener._read (outbound_message_listener.dart:66)
OutboundMessageListener._read.<anonymous closure> (outbound_message_listener.dart:82)
_rootRunUnary (zone.dart:1434)
<asynchronous gap>
AtLookupImpl._process (at_lookup_impl.dart:457)
<asynchronous gap>
AtLookupImpl._stats (at_lookup_impl.dart:320)
<asynchronous gap>
AtLookupImpl.executeVerb (at_lookup_impl.dart:246)
<asynchronous gap>
RemoteSecondary.executeVerb (remote_secondary.dart:39)
<asynchronous gap>
SyncUtil.getLatestServerCommitId (sync_util.dart:88)
<asynchronous gap>
SyncServiceImpl._getServerCommitId (sync_service_impl.dart:544)
<asynchronous gap>
SyncServiceImpl._isInSync (sync_service_impl.dart:525)
<asynchronous gap>
SyncServiceImpl._processSyncRequests (sync_service_impl.dart:164)
<asynchronous gap>
SyncServiceImpl._scheduleSyncRun.<anonymous closure> (sync_service_impl.dart:80)
<asynchronous gap>
_ScheduledTask._run.<anonymous closure> (cron.dart:1)
<asynchronous gap>
```
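A plausible fix for the stuck flag (a sketch only; `SyncRunner`, `processSyncRequests`, and `doSync` are hypothetical names — only `_syncInProgress` comes from the code above) is to reset the flag in a `finally` block, so a thrown `AtTimeoutException` cannot leave it stuck at true:

```dart
// Hypothetical sketch: clear _syncInProgress in `finally` so a timeout
// (or any other exception) cannot leave it stuck at true.
class SyncRunner {
  bool _syncInProgress = false;

  Future<void> processSyncRequests(Future<void> Function() doSync) async {
    if (_syncInProgress) {
      // A previous run is genuinely still active; skip this round.
      return;
    }
    _syncInProgress = true;
    try {
      await doSync();
    } on Exception {
      // An AtTimeoutException (or any other error) lands here;
      // rethrow so the caller can log or retry.
      rethrow;
    } finally {
      // Always clear the flag, even when the server times out.
      _syncInProgress = false;
    }
  }
}
```

With this shape, the next scheduled sync run sees `_syncInProgress == false` after a failure and proceeds normally instead of assuming the previous sync is still running.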
Spent 5 SP in PR-40.
Enhanced the outbound message listener to handle partial responses and responses sent in multiple packets under slow network conditions (PR-185); burned 3 SP for this.
Total: 8 SP burned across PR-40 and PR-185.
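The idea behind that listener enhancement can be illustrated with a sketch (all names here are hypothetical, not the actual PR-185 code, and it assumes a newline-terminated response for simplicity): buffer each incoming chunk, and complete the pending read only once the full terminator has arrived, so a response split across several TCP packets is reassembled instead of being treated as complete too early.

```dart
import 'dart:async';

// Hypothetical sketch of accumulating a response that may arrive in
// several packets on a slow network.
class BufferedResponseListener {
  final _buffer = StringBuffer();
  Completer<String>? _pending;
  final String terminator;

  BufferedResponseListener({this.terminator = '\n'});

  // Called for every chunk the socket delivers; a chunk may be only
  // part of one response.
  void onData(String chunk) {
    _buffer.write(chunk);
    final data = _buffer.toString();
    final end = data.indexOf(terminator);
    if (end >= 0 && _pending != null && !_pending!.isCompleted) {
      // Full response received: hand it to the waiting reader.
      _pending!.complete(data.substring(0, end));
      _buffer.clear();
    }
  }

  // Waits for one complete response, failing with a TimeoutException
  // (rather than hanging) if the network is too slow.
  Future<String> read({Duration timeout = const Duration(seconds: 10)}) {
    _pending = Completer<String>();
    return _pending!.future.timeout(timeout);
  }
}
```

The key design choice is that `read()` resolves only on the terminator, never on a raw socket event, so partial packets can no longer be mistaken for whole responses.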
The work is being tracked in the following GitHub issue: https://github.com/atsign-foundation/apps/issues/552
**Describe the bug** If the network between the at_client and the secondary is busy/congested, then bad things can happen: requests time out and the sync process can hang.
**To Reproduce** Steps to reproduce the behavior:
**Expected behavior** Either a clear error message and retries, or the protocol should handle thin pipes: perhaps detect the lower bandwidth and chunk/wait accordingly.
This is VERY important to IoT, hence P2 in my mind.
Error trace
And a hung process (I had to hit Ctrl-C).
Then, after I remove the network load, things work fine.
Were you using an @application when the bug was found?