heckj opened this issue 2 months ago (status: Open)
Learned from @pvh that this may be expected behaviour. He suggested adding a test to the relevant suite (https://github.com/automerge/automerge-repo/blob/6ef25d8f300133bfefc71b44369cc222bcb43f0c/packages/automerge-repo/test/Repo.test.ts#L207) asserting that the server doesn't kick out an Unavailable message before checking storage to see whether the document is available.
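The behaviour such a test would pin down can be sketched abstractly, without the real automerge-repo API: a responder should consult storage *before* emitting an unavailable message, so a stored document produces only a sync reply. All names here (`respondToRequest`, the `Msg` shape, the `Map` standing in for a storage adapter) are hypothetical, not automerge-repo identifiers.

```typescript
type Msg = { type: "sync"; data: string } | { type: "unavailable" }

// Hypothetical responder: checks storage before answering, so a stored
// document never produces an interim "unavailable" message.
async function respondToRequest(
  storage: Map<string, string>,
  documentId: string
): Promise<Msg[]> {
  const stored = storage.get(documentId)
  if (stored !== undefined) {
    // Document found in storage: reply with a single sync message.
    return [{ type: "sync", data: stored }]
  }
  // Only after storage has been checked do we report unavailability.
  return [{ type: "unavailable" }]
}
```

A regression test would then assert that the reply list for a stored document contains no `unavailable` entry at all, rather than merely ending in a `sync`.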
Working on the automerge-repo-swift implementation, I ran into a snag with unexpected responses. I'm running the simple code for automerge-repo-sync-server, but verified the same behaviour against sync.automerge.org.
The versions involved here:
- `@automerge/automerge`: `^2.1.13`
- `@automerge/automerge-repo`: `^1.1.5`
- `@automerge/automerge-repo-network-websocket`: `^1.1.5`
- `@automerge/automerge-repo-storage-nodefs`: `^1.1.5`
The sync-server code is running locally in a Docker container, per PR https://github.com/automerge/automerge-repo-sync-server/pull/7.
I've created an integration test that does the following:
In the traces, I join the repo, accept the peer message, and send a request for the document ID. I then get two WebSocket message responses: first an UNAVAILABLE message, and second a SYNC message with the contents from the server:
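The exchange above can be reduced to a minimal client-side state machine. Under a strict reading, where an unavailable reply is final, the trailing sync message is an illegal transition, which is exactly what trips the client's assertion. The `DocState` type and `transition` function below are a hypothetical sketch, not the automerge-repo-swift implementation.

```typescript
type DocState = "requesting" | "unavailable" | "ready"
type WireMsg = "unavailable" | "sync"

// Strict transition table: once a document is marked unavailable,
// no further messages are expected for it.
function transition(state: DocState, msg: WireMsg): DocState {
  if (state === "requesting" && msg === "unavailable") return "unavailable"
  if ((state === "requesting" || state === "ready") && msg === "sync") return "ready"
  // Mirrors the assertion in the client: a sync message arriving after
  // the document was marked unavailable is an unexpected state.
  throw new Error(`unexpected ${msg} message in state ${state}`)
}

// Folding the server's observed reply order (unavailable, then sync)
// through this machine throws on the second message.
```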
Test log snippet from my tracing/diagnostics
At the moment, I'm taking the first UNAVAILABLE response as a definitive result and marking the document as such, which then trips an assertion for being in an unexpected state when I later get a SYNC message attempting to update it. I believe sending the first UNAVAILABLE message is a bug in the current implementation.