element-hq / synapse

Synapse: Matrix homeserver written in Python/Twisted.
https://element-hq.github.io/synapse
GNU Affero General Public License v3.0

Very long query times for get_auth_chain_difference_chains #17129

Open · poettig opened this issue 4 months ago

poettig commented 4 months ago

Description

Since upgrading to v1.105.1 because of the security patch, we have been seeing very high transaction times, specifically for the transaction `get_auth_chain_difference_chains`. In our case, we upgraded directly from v1.102.0 to v1.105.1, but we have heard from others who hit the same problem after upgrading from v1.104.0.

This could be a regression caused by https://github.com/element-hq/synapse/pull/17044, which modified code that `get_auth_chain_difference_chains` calls through several layers of indirection, or a regression caused by the security patches themselves.

[Screenshot: Grafana graph of `get_auth_chain_difference_chains` transaction times]

Steps to reproduce

Homeserver

kit.edu

Synapse Version

1.105.1

Installation Method

Debian packages from packages.matrix.org

Database

postgresql 13.14-0+deb11u1

Workers

Multiple workers

Platform

root@main-1:~# cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

root@main-1:~# uname -a
Linux main-1 5.10.0-28-amd64 #1 SMP Debian 5.10.209-2 (2024-01-31) x86_64 GNU/Linux

The Synapse main process, the workers, and the database all run in virtual machines.

Configuration

Presence is enabled.

Relevant log output

I'm not sure which logs would be helpful here, and I can't find any relevant log entries from the affected code. I can add logs if you give me a hint about which logging contexts I have to set to debug :)
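If debug logging does turn out to be needed, here is a minimal sketch of a YAML log-config override. The logger name is an assumption based on the module that implements this query (`synapse/storage/databases/main/event_federation.py`); treat it as a starting point, not a confirmed recipe:

```yaml
# Sketch only: turn on debug logging for the storage module assumed to
# implement get_auth_chain_difference_chains.
loggers:
    synapse.storage.databases.main.event_federation:
        level: DEBUG
```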

Anything else that would be useful to know?

No response

rda0 commented 4 months ago

I believe we have the same problem. We noticed increased CPU usage after the upgrade from 1.104.0 to 1.105.1. The spikes occur occasionally and last for a few minutes.

[Screenshot: Grafana CPU usage graph (phd-matrix Synapse Prometheus dashboard), showing the spikes after the upgrade to 1.105.1]

Homeserver: phys.ethz.ch
Synapse Version: 1.105.1
Installation Method: pip
Database: postgresql-15 15.6-0+deb12u1
Workers: Multiple workers
Presence: disabled
Platform:


phd-matrix:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
phd-matrix:~# uname -a
Linux phd-matrix 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux

csett86 commented 4 months ago

Same here on a semi-large homeserver deployment. The issues started with 1.105.0, with `get_auth_chain_difference_chains` running for minutes instead of seconds. It persisted through 1.105.1:

[Screenshot: Grafana graph, 2024-04-26 23:59]

This evening I took a chance and reverted #17044 locally to see if it would help, but that breaks federation, so don't do it:

  File "synapse/storage/databases/main/events.py", line 464, in _persist_events_txn
    self._persist_event_auth_chain_txn(txn, [e for e, _ in events_and_contexts])
  File "synapse/storage/databases/main/events.py", line 562, in _persist_event_auth_chain_txn
    self._add_chain_cover_index(
  File "synapse/storage/databases/main/events.py", line 779, in _add_chain_cover_index
    for links in EventFederationStore._get_chain_links(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'EventFederationStore' has no attribute '_get_chain_links'

csett86 commented 4 months ago

One more update: by reverting #17044 but keeping the class method in place for the security fix (55b0aa847a61774b6a3acdc4b177a20dc019f01a), I now have a Synapse that is back to normal for `get_auth_chain_difference_chains`:

[Screenshot: Grafana graph, 2024-04-27 17:21]

From what I have spotted in #17044 so far:

The first usage of the query in the previous code:

  1. First builds the links: https://github.com/element-hq/synapse/pull/17044/files#diff-1f5d8bffadd3271e42c2f6a66474bbf5e3e6694b009aa616b5f5433506217ff1L320-L330
  2. Then runs some functions on it: https://github.com/element-hq/synapse/pull/17044/files#diff-1f5d8bffadd3271e42c2f6a66474bbf5e3e6694b009aa616b5f5433506217ff1L332-L336
  3. Then does the difference_update: https://github.com/element-hq/synapse/pull/17044/files#diff-1f5d8bffadd3271e42c2f6a66474bbf5e3e6694b009aa616b5f5433506217ff1L338

In the new code, since the yield is at the end, I think steps 2 and 3 are switched (so: first building the links, then running the difference_update, and only then step 2 from above).

In the second usage, the difference_update is inside the inner set_to_chain loop (https://github.com/element-hq/synapse/pull/17044/files#diff-1f5d8bffadd3271e42c2f6a66474bbf5e3e6694b009aa616b5f5433506217ff1L621), while in the new code it runs at the end of the outer loop and takes links as its parameter.

I am not sure which of the two usages causes the long-running queries, but reverting both to the earlier version helped get things back to normal. From my limited understanding, at the very least the yield and the difference_update in the new class method probably need to be in the reverse order to get closer to the old implementation.
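To make the suspected reordering concrete, here is a minimal sketch with hypothetical stand-in names (this is not the real Synapse code):

```python
# Hypothetical sketch: once the batch loop becomes a generator with the
# yield at the end, the prune against `chains` (step 3) runs before the
# caller has materialized the current batch (step 2), so it only ever
# sees the previous batch's results.
import itertools

def link_batches(chains_to_fetch, chains):
    while chains_to_fetch:
        batch = tuple(itertools.islice(chains_to_fetch, 1000))
        chains_to_fetch.difference_update(batch)
        links = {chain_id: [] for chain_id in batch}  # step 1: build links
        chains_to_fetch.difference_update(chains)     # step 3 (runs too early)
        yield links                                   # caller does step 2 now

chains: dict = {}
for links in link_batches({1, 2, 3}, chains):
    for chain_id in links:
        chains[chain_id] = True                       # step 2: materialize
```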

TheDidek commented 3 months ago

For me, usage jumped dramatically after v1.105.1, completely clogging the database.

[Screenshots: database load graphs after the upgrade to v1.105.1]

heftig commented 3 months ago

It looks like the refactoring missed that the chains added by `_materialize` to the `chains` dict should be removed from `chains_to_fetch`, so I guess it keeps requesting the same chains over and over, leading to a quadratic explosion.
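A rough sketch of that hypothesis, with hypothetical helper names standing in for the real query and `_materialize`:

```python
# Sketch (hypothetical names): only the fetched batch is pruned from
# chains_to_fetch, while chains that _materialize resolves along the way
# stay queued and get queried again on later passes.
import itertools

def fetch_links(batch):
    """Stand-in for the recursive event_auth_chain_links query."""
    return {chain_id: [] for chain_id in batch}

def materialize(chain_id, links, chains):
    """Stand-in for _materialize: marks a chain as resolved."""
    chains[chain_id] = True

def resolve(event_chains):
    chains: dict = {}
    chains_to_fetch = set(event_chains)
    while chains_to_fetch:
        batch = tuple(itertools.islice(chains_to_fetch, 1000))
        chains_to_fetch.difference_update(batch)   # prunes the batch itself...
        links = fetch_links(batch)
        for chain_id in links:
            materialize(chain_id, links, chains)
        # ...but without the prune below, already-resolved chains stay in
        # chains_to_fetch; this is the step the refactoring appears to drop.
        chains_to_fetch.difference_update(chains)
    return chains
```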

heftig commented 3 months ago

I've also partially reverted #17044 with apparent success. This is the diff:

Diff:

```diff
diff --git c/synapse/storage/databases/main/event_federation.py w/synapse/storage/databases/main/event_federation.py
index fb132ef09..97ac07c3a 100644
--- c/synapse/storage/databases/main/event_federation.py
+++ w/synapse/storage/databases/main/event_federation.py
@@ -280,16 +280,64 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         # Now we look up all links for the chains we have, adding chains that
         # are reachable from any event.
+        #
+        # This query is structured to first get all chain IDs reachable, and
+        # then pull out all links from those chains. This does pull out more
+        # rows than is strictly necessary, however there isn't a way of
+        # structuring the recursive part of query to pull out the links without
+        # also returning large quantities of redundant data (which can make it a
+        # lot slower).
+        sql = """
+            WITH RECURSIVE links(chain_id) AS (
+                SELECT
+                    DISTINCT origin_chain_id
+                FROM event_auth_chain_links WHERE %s
+                UNION
+                SELECT
+                    target_chain_id
+                FROM event_auth_chain_links
+                INNER JOIN links ON (chain_id = origin_chain_id)
+            )
+            SELECT
+                origin_chain_id, origin_sequence_number,
+                target_chain_id, target_sequence_number
+            FROM links
+            INNER JOIN event_auth_chain_links ON (chain_id = origin_chain_id)
+        """

         # A map from chain ID to max sequence number *reachable* from any event ID.
         chains: Dict[int, int] = {}
-        for links in self._get_chain_links(txn, set(event_chains.keys())):
+
+        # Add all linked chains reachable from initial set of chains.
+        chains_to_fetch = set(event_chains.keys())
+        while chains_to_fetch:
+            batch2 = tuple(itertools.islice(chains_to_fetch, 1000))
+            chains_to_fetch.difference_update(batch2)
+            clause, args = make_in_list_sql_clause(
+                txn.database_engine, "origin_chain_id", batch2
+            )
+            txn.execute(sql % (clause,), args)
+
+            links: Dict[int, List[Tuple[int, int, int]]] = {}
+
+            for (
+                origin_chain_id,
+                origin_sequence_number,
+                target_chain_id,
+                target_sequence_number,
+            ) in txn:
+                links.setdefault(origin_chain_id, []).append(
+                    (origin_sequence_number, target_chain_id, target_sequence_number)
+                )

             for chain_id in links:
                 if chain_id not in event_chains:
                     continue
                 _materialize(chain_id, event_chains[chain_id], links, chains)

+            chains_to_fetch.difference_update(chains)
+
         # Add the initial set of chains, excluding the sequence corresponding to
         # initial event.
         for chain_id, seq_no in event_chains.items():
@@ -579,16 +627,61 @@ class EventFederationWorkerStore(SignatureWorkerStore, EventsWorkerStore, SQLBas
         # Now we look up all links for the chains we have, adding chains that
         # are reachable from any event.
+        #
+        # This query is structured to first get all chain IDs reachable, and
+        # then pull out all links from those chains. This does pull out more
+        # rows than is strictly necessary, however there isn't a way of
+        # structuring the recursive part of query to pull out the links without
+        # also returning large quantities of redundant data (which can make it a
+        # lot slower).
+        sql = """
+            WITH RECURSIVE links(chain_id) AS (
+                SELECT
+                    DISTINCT origin_chain_id
+                FROM event_auth_chain_links WHERE %s
+                UNION
+                SELECT
+                    target_chain_id
+                FROM event_auth_chain_links
+                INNER JOIN links ON (chain_id = origin_chain_id)
+            )
+            SELECT
+                origin_chain_id, origin_sequence_number,
+                target_chain_id, target_sequence_number
+            FROM links
+            INNER JOIN event_auth_chain_links ON (chain_id = origin_chain_id)
+        """
+
+        # (We need to take a copy of `seen_chains` as we want to mutate it in
+        # the loop)
+        chains_to_fetch = set(seen_chains)
+        while chains_to_fetch:
+            batch2 = tuple(itertools.islice(chains_to_fetch, 1000))
+            clause, args = make_in_list_sql_clause(
+                txn.database_engine, "origin_chain_id", batch2
+            )
+            txn.execute(sql % (clause,), args)
+
+            links: Dict[int, List[Tuple[int, int, int]]] = {}
+
+            for (
+                origin_chain_id,
+                origin_sequence_number,
+                target_chain_id,
+                target_sequence_number,
+            ) in txn:
+                links.setdefault(origin_chain_id, []).append(
+                    (origin_sequence_number, target_chain_id, target_sequence_number)
+                )
-        # (We need to take a copy of `seen_chains` as the function mutates it)
-        for links in self._get_chain_links(txn, set(seen_chains)):
             for chains in set_to_chain:
                 for chain_id in links:
                     if chain_id not in chains:
                         continue
                     _materialize(chain_id, chains[chain_id], links, chains)

+                chains_to_fetch.difference_update(chains)
+
                 seen_chains.update(chains)

         # Now for each chain we figure out the maximum sequence number reachable
```
jameskitt616 commented 3 months ago

I see similar behavior since v1.105.0 on my Synapse Grafana dashboard. I've also noticed that disk writes are now constantly at 100-200 MB/s, which causes the postgres service to accumulate multiple GBs of disk writes within a few minutes (as you can see in this iotop screenshot).

[Screenshot: iotop showing postgres disk writes]