Closed kegsay closed 1 week ago
I think this is a duplicate of https://github.com/element-hq/synapse/issues/8917
Also: it might be better described as a "backoff period" than a "retry period", since Synapse doesn't itself initiate any retries for /keys/claim?
I've seen this happen again, this time between element.io <--> matrix.org, where the client on element.io saw this in response to /keys/claim:

`"matrix.org": Object {"message": String("Failed to send request: HttpResponseException: 429: Too Many Requests"), "status": Number(503.0)}`
Rate limiting this endpoint feels suboptimal...
https://github.com/matrix-org/matrix-spec-proposals/pull/4081 would be a solution to this because then the sending server could just serve up the fallback key if it cannot talk to the recipient server.
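The MSC4081 idea above can be sketched as follows. This is an illustrative sketch only, not Synapse's actual implementation; the names (`claim_key`, `RemoteUnreachable`, `fallback_key_cache`) are hypothetical:

```python
# Hypothetical sketch of the MSC4081 idea: if the remote server can't be
# reached during /keys/claim, serve a locally cached fallback key instead of
# failing outright (and causing a UTD). All names here are illustrative.

class RemoteUnreachable(Exception):
    """Raised when the remote homeserver cannot be contacted."""

def claim_key(user_id, device_id, remote_claim, fallback_key_cache):
    """Try the remote /keys/claim; on failure, serve a cached fallback key."""
    try:
        return remote_claim(user_id, device_id)
    except RemoteUnreachable:
        key = fallback_key_cache.get((user_id, device_id))
        if key is None:
            raise  # nothing cached: the UTD is unavoidable
        return key

# Usage: a remote that always fails, plus a cached fallback key.
def failing_remote(user_id, device_id):
    raise RemoteUnreachable()

cache = {("@alice:remote.example", "DEV1"): {"signed_curve25519:AAAA": "fallback"}}
print(claim_key("@alice:remote.example", "DEV1", failing_remote, cache))
```

With such a cache in place, a federation outage degrades to using the (one) fallback key rather than producing an unclaimable OTK.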
@kegsay pretty sure this isn't a 429 but really a 503 (see the status field), which matrix.org's haproxy turns into a 429: if a single endpoint reports 503 to CF, then CF will mark the whole haproxy as down, taking out the whole service. It's a horrible hack, given it's absolutely nothing to do with rate limiting; I suspect 429 was chosen as a code that would encourage retries without the backend being marked as down.
Crypto are not actively working on this, because the best solution would be https://github.com/matrix-org/matrix-spec-proposals/pull/4081
Another one, element.io <-> matrix.org, failing with a different error:

`failures={"matrix.org": Object {"status": Number(503), "message": String("Failed to send request: TimeoutError: Timed out after 10s")}}`
This happened again in a large E2EE room. The failure mode was subtly different, though, because /keys/claim did eventually succeed for a 2nd+ message, so the error message was `The message was encrypted using an unknown message index, first known index 1, index of the message 0` rather than the keys being missing entirely.
It's unclear to me how this is different from https://github.com/element-hq/element-meta/issues/2154, which covers the implementation of MSC4081.
Closing for now, unless someone can clarify.
Description
Debugging a UTD (rageshake), the cause appears to be /keys/claim failing with:

This happens again 40 minutes later, which feels very wrong if the retry period is 40+ minutes.
A long retry period like this will cause UTDs, because the sender cannot claim the OTK for one or more of the recipient's devices.
The error message originates here, which is called from here when claiming keys. This calls through to the transport layer's post_json, which can throw NotRetryingDestination. That exception appears to be thrown in get_retry_limiter here. The retry interval controls the duration and seems to be persisted: it is loaded here and is modified according to:
The retry multiplier is a configurable value, and the retry interval defaults to the minimum: `self.retry_interval = self.destination_min_retry_interval_ms`
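The backoff growth described above can be sketched like this. The specific values (minimum, multiplier, cap) are assumptions for illustration, not matrix.org's actual configuration:

```python
# Illustrative sketch of Synapse-style per-destination backoff: on each
# consecutive failure the interval is multiplied, starting from a minimum
# and capped at a maximum. All three values below are ASSUMED for
# illustration; they are not matrix.org's real config.

MIN_RETRY_INTERVAL_MS = 10 * 60 * 1000        # assumed: 10 minutes
RETRY_MULTIPLIER = 2                          # assumed
MAX_RETRY_INTERVAL_MS = 24 * 60 * 60 * 1000   # assumed: 1 day cap

def next_interval(current_ms: int) -> int:
    """Compute the backoff interval after one more failure."""
    if current_ms == 0:
        return MIN_RETRY_INTERVAL_MS
    return min(current_ms * RETRY_MULTIPLIER, MAX_RETRY_INTERVAL_MS)

interval = 0
for failure in range(1, 5):
    interval = next_interval(interval)
    print(f"failure {failure}: backoff {interval // 60000} min")
```

With these assumed values, the backoff already exceeds 40 minutes after only three consecutive failures, which would be consistent with the 40-minute gap seen in the rageshake.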
So what's matrix.org's config?

Steps to reproduce
I'm guessing:
Homeserver
matrix.org
Synapse Version
Whatever was running on May 30
Installation Method
I don't know
Database
postgres
Workers
Multiple workers
Platform
?
Configuration
No response
Relevant log output
Anything else that would be useful to know?
Proposed solution here would be to ignore the backoff for /keys/claim requests, since if they fail it will definitely cause a UTD. If we don't want to do that, a suitably low retry period (capped in the order of minutes) could be a viable alternative.

Alternatively, I had assumed that Synapse cleared backoffs when the other HS sent something to matrix.org? Surely that would have happened here?