matrix-org / synapse

Synapse: Matrix homeserver written in Python/Twisted.
https://matrix-org.github.io/synapse
Apache License 2.0

Forward extremities accumulate and lead to poor performance #1760

Closed: ara4n closed this issue 4 years ago

ara4n commented 7 years ago

TLDR: To determine if you are affected by this problem, run the following query:

select room_id, count(*) c from event_forward_extremities group by room_id order by c desc limit 20;

Any rows showing a count of more than a handful (say 10) are cause for concern. You can probably gain some respite by running the query at https://github.com/matrix-org/synapse/issues/1760#issuecomment-379183539.


Whilst investigating the cause of heap usage spikes in synapse, correlating jumps in RSZ with logs showed that 'resolving state for !curbaf with 49 groups' loglines took ages to execute and would temporarily take loads of heap (resulting in a permanent hike in RSZ, as python is bad at reclaiming heap).

On looking at the groups being resolved, it turns out that these were the extremities of the current room, and whenever synapse queries the current room state, it has to merge all of these together, an operation whose implementation is currently very slow. To clear the extremities, one has to talk in the room (each message 'heals' 10 extremities, as the max prev-events for a message is 10).

Problems here are:

ara4n commented 7 years ago

More logging for resolving state groups was added in #1767 which will hopefully help explain this

ara4n commented 7 years ago

As a workaround for people with seriously fragmented rooms (e.g. @Half-Shot has 209 extremities in #mozilla_#rust:matrix.org atm):

delete from event_forward_extremities where
    room_id in (select room_id from event_forward_extremities group by room_id having count(*)>1) and
    event_id not in 
        (select max(event_id) from event_forward_extremities where
            room_id in (select room_id from event_forward_extremities group by room_id having count(*)>1)
         group by room_id);

...is a dangerous and risky and not-really-recommended solution which will remove all but the newest extremities from rooms with multiple extremities. If it leaves the 'wrong' extremity for the room, however, bad things could happen.

It's useful if your server is so hosed that you can't otherwise send dummy messages into the room to heal it. It should be run whilst the server is shut down. So far we haven't seen it make problems worse; only better.

erikjohnston commented 7 years ago

select max(event_id) from event_forward_extremities

won't necessarily give you the latest event id. It may sort of work given that synapse sends out events with an auto-incrementing integer at the front, but that won't be true across different servers.

To get the latest you'd need to compare the stream_orderings.
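For illustration, a minimal sketch (assuming the standard synapse postgres schema) of picking the newest extremity per room by comparing stream_orderings:

-- Sketch only: for each room, keep the forward extremity whose event has the
-- highest stream_ordering in the local events table.
SELECT DISTINCT ON (f.room_id)
    f.room_id, f.event_id
FROM event_forward_extremities AS f
JOIN events AS e USING (event_id)
ORDER BY f.room_id, e.stream_ordering DESC;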

erikjohnston commented 7 years ago

DELETE FROM event_forward_extremities AS e
USING ( 
    SELECT DISTINCT ON (room_id)
    room_id,
    last_value(event_id) OVER w AS event_id
    FROM event_forward_extremities
    NATURAL JOIN events
    WINDOW w AS (
        PARTITION BY room_id
        ORDER BY stream_ordering
        range between unbounded preceding and unbounded following
    )
    ORDER BY room_id, stream_ordering
) AS s
WHERE
    s.room_id = e.room_id
    AND e.event_id != s.event_id
    AND e.room_id = '!jpZMojebDLgJdJzFWn:matrix.org';

is probably more like how you'd do it on postgres

richvdh commented 7 years ago

I've been looking at this over the last few days, as it appears to be a common cause of poor performance for many people. Conclusions so far follow.

There are two principal causes for the accumulation of extremities:

The first is your server being offline, or unreachable by other servers in the federation. This can lead to a gap in the room DAG. Your server will make an attempt to backfill when it receives events after a gap, but will cap this to 10 events, and the backfill attempt may not succeed. To some extent, this situation is to be expected. However, it is particularly nasty because the accumulation of extremities makes your server perform poorly, which makes it slow to respond to federation requests, which makes other servers more likely to consider your server offline and stop trying to send to it - thus exacerbating the problem.

The second cause is a rejected event. If your server receives an event over federation which it believes was forbidden under the auth rules of the room, it will reject it. However, if other servers in the federation accept it, then it will become part of the DAG as they see it; this means that your server will see a gap in the DAG, and the rejected event's predecessor will become a forward_extremity. This problem is also self-perpetuating, because a rejected event also causes the homeserver's view of the room state to be reset (#1935), which can lead to more rejections (and hence more forward extremities) down the line.

This second cause shouldn't really happen: all HSes should agree on which events are allowed in the DAG, so we don't expect to see rejections unless someone is doing something nefarious. It clearly is happening though, so my current investigation is focussed on trying to pin down why. I'd also like to do something about #1935, such that when a rejection does happen (through incompetence or malice), it doesn't completely mess everything up thereafter.

richvdh commented 7 years ago

It clearly is happening though, so my current investigation is focussed on trying to pin down why

The rejections appeared to stem from the fact that the state of the room was out of sync from the very start - it looked like events were received over federation while the join was still in progress, and a race condition meant that the state ended up in an invalid, uh, state. Hopefully this will be fixed by https://github.com/matrix-org/synapse/pull/2016.

turt2live commented 6 years ago

This seems to have gotten worse (at least for me) over the last week or so. Every second day I'm having to clear extremities from t2bot.io just to keep the thing running in a reasonable fashion. No apparent consistency between rooms, just 25+ extremities for 10-15 rooms after a couple days.

elinorbgr commented 6 years ago

It still seems to be quite a problem (it is for me at least).

If that interests anyone, I monitor the evolution of these extremities with this SQL query (it is a little big because it retrieves the canonical alias of the offending rooms as well):

SELECT f.count, concat(f.alias, ' (', f.room_id, ')')
    FROM (
        SELECT t.room_id, t.count, se.event_id, e.content::json->'alias' AS alias
        FROM (
            SELECT room_id, count(*) 
            FROM event_forward_extremities
            GROUP BY room_id HAVING count(*)>1
        ) t 
        LEFT OUTER JOIN current_state_events AS se
            ON se.room_id = t.room_id AND se.type = 'm.room.canonical_alias'
        LEFT OUTER JOIN events AS e
            ON se.event_id = e.event_id
    ) f;
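
The LEFT OUTER JOINs are deliberate: rooms that have no m.room.canonical_alias state event still show up in the output, just with an empty alias.
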
elinorbgr commented 6 years ago

From what I see, the worst offenders seem to be IRC-bridged rooms with a high join/part turnover, such as #mozilla_#rust:matrix.org, #mozilla_#rust-offtopic:matrix.org, and #haskell:matrix.org

Ralith commented 6 years ago

I haven't had a serious breakdown or runaway forward extremity accumulation while on #rust for several months, FWIW. It seems that either there was a specific event that triggered it which hasn't recurred in that room, or at least some of the causes have been addressed.

elinorbgr commented 6 years ago

I had no catastrophic accumulation, but these rooms sat at around 60-80 extremities.

I finally got around to just leaving these rooms, and I must say, my HS has been much more responsive since I did.

qbit commented 6 years ago

I just had what I assume was this issue. I had multiple rooms with >6 (35 max) extremities. Synapse became completely unresponsive (84847 _synapse 64 0 1548M 1114M onproc/0 - 18.4H 61.33% python2.7); I have SYNAPSE_CACHE_FACTOR=0.02. This seems to happen to me about every other week and I am not in any large channels (confirmed by clearing the cache on all my clients and looking at the rooms I am in). I am also the only user on the homeserver.

Edit: Just hit it again. It looks like one of the room IDs that is accumulating extremities is the Matrix HQ room. Is there a way to remove users from rooms via the db? Maybe I can remove them via a cron job?

rogerbraun commented 6 years ago

I experience this with the #haskell room on freenode. Is there any way to reset the room or delete it? The extremities come back as soon as I delete them.

ara4n commented 6 years ago

We just had to run this on matrix.org after lots of freenode membership churn seemingly fragmented lots of DAGs, causing a feedback loop where subsequent freenode joins got slower and slower, making freenode grind to a halt.

For the record, the query used was:

BEGIN;
SET enable_seqscan=off;
DELETE FROM event_forward_extremities AS e
USING ( 
    SELECT DISTINCT ON (room_id)
    room_id,
    last_value(event_id) OVER w AS event_id
    FROM event_forward_extremities
    NATURAL JOIN events
    WINDOW w AS (
        PARTITION BY room_id
        ORDER BY stream_ordering
        range between unbounded preceding and unbounded following
    )
    ORDER BY room_id, stream_ordering
) AS s,
    (
    select room_id from event_forward_extremities group by room_id having count(*)>1 
) AS x
WHERE
    s.room_id = e.room_id
    AND e.event_id != s.event_id
    AND e.room_id = x.room_id;
COMMIT;

...which took a few minutes to run.

turt2live commented 6 years ago

On the receiving end of many of those membership events, I've also seen extremities skyrocket. Under normal load, extremities accumulate slowly; however, the last day or so has caused fairly major outages on my end :(

Valodim commented 6 years ago

Guys, this is a major problem.

I've been running a synapse instance for a year and a half now (some 50 active users, joining the typical huge channels), and the general experience is that everything is mostly fine as long as no forward extremities accumulate, but as soon as they do (5+), it comes out of nowhere, grinds everything to a halt, and needs manual intervention.

Really, for the first two to three months my impression of admin complexity was "just apt-get upgrade once in a while, you're good. no advanced skills necessary". This has since changed to "better know about these sql statements from that issue on the tracker, or your hs is bound to blow up sooner or later". One admin of a major HS I talked to told me they pretty much just regularly schedule downtimes to run the above "dangerous and risky and not-really-recommended" query. For myself, I mostly hope that I won't be asleep when this happens so I can handle things fast enough to minimize downtime for my users. Still, this is unacceptable reliability for something people want to use as a messenger, not to mention the admin load.

I don't know how much this shows on matrix.org since it's kind of in a special position, but for other HSes I cannot overstate how much of an impact this has on maintaining a synapse instance. Really, please allocate more time to this. There are workable suggestions above, e.g. send dummy events to channels when there are more than a few extremities - this is pretty much what I end up doing manually every once in a while.

ara4n commented 6 years ago

Totally agreed. We're actually working on this as part of the current wave of synapse optimisation work, triggered by running out of CPU on matrix.org last week (https://twitter.com/matrixdotorg/status/951403752522682369 etc)

The internal discussion lives at https://riot.im/develop/#/room/#matrix-core:matrix.org/$1516300707383452uirNl:matrix.org (which is mainly me failing to follow the discussion, but hey)

ara4n commented 6 years ago

I think i'm right in saying that the conclusion from the last wave of work on this was that we should just speed up state resolution so that it doesn't matter that the DAG is fragmented. @richvdh did this end up being https://github.com/matrix-org/synapse/pull/2864, and did it work?

turt2live commented 6 years ago

fwiw, I'm still seeing serious performance issues even with that last batch of work. The extremity count gets higher before making an impact though (around 20-30ish per room, ~300 overall).

richvdh commented 6 years ago

I think i'm right in saying that the conclusion from the last wave of work on this was that we should just speed up state resolution so that it doesn't matter that the DAG is fragmented. @richvdh did this end up being #2864, and did it work?

Yes it did, and I think it helped a bit (though probably not so much on matrix.org where the extremity count tends to be fairly low anyway).

The situation is that now we should only have to do state resolution when we get a state event, rather than on every event. (State resolution gets more expensive with more extremities, and with a larger room state.) For rooms where the traffic is dominated by messages rather than state, I would hope #2864 would make a reasonable difference. For rooms where there are a lot of state changes, it probably won't help much.
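For a rough sense of which rooms have a large current state (one of the factors that makes state resolution expensive), a sketch assuming the standard synapse schema:

-- Sketch: rooms ranked by size of their current state; state resolution cost
-- grows with both the number of extremities and the size of the room state.
SELECT room_id, count(*) AS state_events
FROM current_state_events
GROUP BY room_id
ORDER BY state_events DESC
LIMIT 20;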

elinorbgr commented 6 years ago

Maybe this is a stupid idea, but would it make sense to have the HS send some kind of "noop event" in the rooms when the number of extremities gets high?

Just an event with no content (is that even possible?), just for the sake of merging the extremities.

richvdh commented 6 years ago

I thought I'd written this down somewhere, but I can't see it anywhere, so here goes:

The main reason not to have every server in the room start sending out "noop" events whenever it sees more than a few extremities is a concern that, in trying to fix things for itself, each server will actually make the problem worse for everyone else: we would probably end up with quite a lot of these events, and could make the problem worse overall.

It's something that might be worth experimenting with a bit though.

richvdh commented 6 years ago

Ok yes that was discussed at https://riot.im/develop/#/room/#matrix-core:matrix.org/$1516300707383452uirNl:matrix.org, but that conversation got a bit sidetracked by deciding to do #2864 instead. As I wrote there though:

anyway if all the HSes in a room suddenly start generating events which heal their particular holes in the DAG, then the chances are that half of those events won't make it to half the other servers, and we end up in more of a mess than we started

Erik also suggested a solution in which we ignore all but the most recent component of connected events in the graph when calculating the current state of a room, and when sending new events (which is effectively what the query above does). The concern is that this would make it easy for somebody to take over a room by deliberately introducing a split in the DAG and giving themselves admin powers in the new component.

richvdh commented 6 years ago

https://github.com/matrix-org/synapse/issues/1760#issuecomment-356270367 isn't great because it takes several minutes to run, and it has to be done while synapse is down otherwise the results get overwritten by the cache.

An alternative approach is to select the extremities to delete into a temporary table while synapse is running, and then just shut it down to do the actual delete:

BEGIN;
SET enable_seqscan=off;
SELECT e.event_id INTO extrems_to_delete FROM event_forward_extremities AS e,
( 
    SELECT DISTINCT ON (room_id)
    room_id,
    last_value(event_id) OVER w AS event_id
    FROM event_forward_extremities
    NATURAL JOIN events
    WINDOW w AS (
        PARTITION BY room_id
        ORDER BY stream_ordering
        range between unbounded preceding and unbounded following
    )
    ORDER BY room_id, stream_ordering
) AS s,
(
    select room_id from event_forward_extremities group by room_id having count(*)>1 
) AS x
WHERE
    s.room_id = e.room_id
    AND e.event_id != s.event_id
    AND e.room_id = x.room_id;
COMMIT;

-- shut synapse down here

DELETE FROM event_forward_extremities WHERE event_id IN (SELECT event_id FROM extrems_to_delete);

-- start it up again

DROP TABLE extrems_to_delete;
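
As an optional sanity check (not part of the recipe above), re-running the count query from the top of this issue after the delete should show the affected rooms back down to a single extremity:

select room_id, count(*) c from event_forward_extremities group by room_id order by c desc limit 20;
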
rubo77 commented 5 years ago

Is this still an issue?

turt2live commented 5 years ago

Yes.

ara4n commented 5 years ago

So we have an experimental mitigation for this problem on synapse develop (https://github.com/matrix-org/synapse/pull/5480, based on the proposal at https://github.com/matrix-org/synapse/issues/5319).

If anyone is getting regularly bitten by this (@realitygaps? @Ralith?) then it would be much appreciated if you could try enabling cleanup_extremities_with_dummy_events: true on synapse develop and see whether it helps or not.

Ralith commented 5 years ago

I haven't been affected by this in years; on the rare occasion where I observe high extremities, they tend to go down on their own again in a few minutes.

ara4n commented 5 years ago

hum. @Valodim? surely SOMEONE must be suffering from this?!

mprasil commented 5 years ago

I was certainly affected by this, but it was a slowly-creeping-up kind of issue that would take weeks or months to have any visible impact. (But once it did, it was bad.)

I suppose mostly high-traffic servers would get this very often, but these are usually hesitant to run a development version, I guess?

benhylau commented 5 years ago

We consistently experience this @tomeshnet

ara4n commented 5 years ago

@mprasil it’s actually more common on low traffic servers, or servers which are on bad connections. extremities build up in large rooms whenever there is a federation outage if the local users aren’t speaking much. the workaround in #5480 injects dummy messages to try to reduce the problem whenever a given room has accumulated more than 10 extremities.
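For a view of which rooms are currently over that threshold, a sketch along the lines of the query in the TLDR (assuming the standard schema):

-- Sketch: rooms that have crossed the 10-extremity threshold targeted by the
-- dummy-event workaround.
SELECT room_id, count(*) AS extremities
FROM event_forward_extremities
GROUP BY room_id
HAVING count(*) > 10
ORDER BY extremities DESC;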

@benhylau can you give it a go?

Valodim commented 5 years ago

This hasn't been an issue for me recently. I used to check for extremities (and send "dummy" events to affected rooms) almost daily, but didn't have to do that in quite a while.

Extremity build-up is noticeable during downtimes, e.g. during a recent reboot due to SACK. But "noticeable" here means 2-3 rooms got 2-3 extremities, and I have a total of 2-5 in ~13 rooms.

So overall, it's not an issue for me currently.

benhylau commented 5 years ago

@ASoTNetworks is on tools rotation at @tomeshnet this cycle, so perhaps he can give more info about how often we've been encountering this lately and the performance issues we suffer from it. My understanding is we'd:

  1. Check out develop containing https://github.com/matrix-org/synapse/pull/5480 on our production system
  2. Use this for a period where we'd typically have an extremities accumulation, then report back?

ASoTNetworks commented 5 years ago

@tomeshnet is currently using the matrix-synapse-py3 package from the https://matrix.org/packages/debian repo.

It really depends when the extremities build up; it can take a few weeks, sometimes a few days or even a few hours.

To test the develop branch, would it be possible to have a release candidate for us to test as a Debian Stretch package?

kroeckx commented 4 years ago

So I had been getting around 24 for a few weeks for #mozilla_#rust:matrix.org, and today it changed to 74, and things are really slow again. This is using 1.2.1.

ara4n commented 4 years ago

@kroeckx can you turn on the cleanup_extremities_with_dummy_events: true option and see if that fixes it?

kroeckx commented 4 years ago

I already ran the query. I've now enabled it. I still see 2 rooms with 4 extremities.

kroeckx commented 4 years ago

So the rust room grew to 6 again so far.

ara4n commented 4 years ago

The way that option works is to only kick in and suppress the extremities if they grow beyond 10. Are you seeing perf issues currently?

kroeckx commented 4 years ago

I guess this is still normal. I'll let you know if I ever see it higher than 10 again.

kroeckx commented 4 years ago

So I see the values change over time, but they stay below 10.

ara4n commented 4 years ago

This is as expected. The question is whether you hit perf problems, or whether 10 is an adequate threshold.

kroeckx commented 4 years ago

I can't say I see performance differences. The only time I could clearly see a difference is when it was much higher.

richvdh commented 4 years ago

So it sounds like the new setting is correctly keeping rooms below 10 extremities, so it is no longer causing a performance problem?

MurzNN commented 4 years ago

On our not-too-large public ru-matrix.org server, running Synapse v1.3.1 with very high cpu/load usage, the extremity counts were far too large:

!iEiJZbwrOzEkZNjsYf:matrix.org   - 613
!YynUnYHpqlHuoTAjsp:matrix.org   - 357
!TKuijUGqELNcsYRdMa:tchncs.de    - 309

and more than 12 rooms with a count >100! After cleaning up with the sql queries, one room was left with a count of 2 and all the others with 1; after a restart they grew to 5-7, and cpu/mem/load dropped dramatically, thanks! Do we need to repeat these cleanups periodically?

aaronraimist commented 4 years ago

@MurzNN Yes, or you could enable the experimental cleanup_extremities_with_dummy_events: true in your homeserver config to automatically clean them up. @ara4n can you add that to the TLDR in the issue?

neilisfragile commented 4 years ago

We no longer consider https://github.com/matrix-org/synapse/pull/5480 to be experimental and have enabled it by default in 1.4.0. So death by extremity buildup should be a thing of the past.

Feedback very much welcome. I'll leave this issue open for now in case folks still have problems.

richvdh commented 4 years ago

I think we can consider this closed now.

benhylau commented 4 years ago

Glad this one didn't make it to 2020