MTRNord opened 4 months ago
the obvious question here is: how did this get accepted into the original database in the first place?
edit: I guess the server you dumped from has a larger page size? I have never configured that in Postgres, but I imagine it is possible.
I think it might have just been silently broken the whole time, since the index was probably created before the event was received. A reindex on the database before the dump, with the same page-size settings, actually works too. I assume Postgres just skips broken rows at runtime to prevent downtime? It only became an issue when dumping and reimporting on a fresh server.
Oh, and since I forgot to link it: https://github.com/matrix-org/synapse/pull/12101 already limits this key. So this issue only concerns historic data people may have received before that PR; it can't happen again on versions that include it, AFAIK.
I assume postgres just skips broken rows at runtime to prevent downtime?
I would be kind of surprised if it did that, to be honest; it seems uncharacteristic.
Description
When dumping and reimporting, Postgres rebuilds the indexes, and index entries are size-limited. At some point someone sent an event with ASCII art in its aggregation_key, which landed in my public.event_relations table. That key is 3497 characters long.
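The size limit at play can be sketched with a little arithmetic. This is a minimal illustration, not Postgres internals: a btree index entry must fit in roughly one third of a page, and on the default 8 KB page Postgres's error message quotes a maximum of 2704 bytes (slightly under 8192/3 because of per-tuple overhead). The function names below are mine, for illustration only.

```python
# Sketch of the btree entry size limit, assuming the ~1/3-of-a-page rule.
def max_btree_entry_bytes(page_size: int = 8192) -> int:
    # Approximation: the real ceiling on an 8 KB page is 2704 bytes,
    # a bit below 8192 // 3 == 2730, due to per-tuple overhead.
    return page_size // 3

def key_fits_index(aggregation_key: str, page_size: int = 8192) -> bool:
    # Byte length (UTF-8) is what counts against the limit, not characters.
    return len(aggregation_key.encode("utf-8")) <= max_btree_entry_bytes(page_size)

# The 3497-character key from this report exceeds the limit even if
# every character encodes to a single byte:
print(key_fits_index("x" * 3497))         # False on the default 8 KB page
print(key_fits_index("x" * 3497, 32768))  # True if compiled with a 32 KB page
```

This is consistent with the page-size guess above: a server built with a larger page size would have indexed the row without complaint, and the failure only surfaces when the dump is restored onto a default-page-size server that rebuilds the index.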
As a result, this happens on every dump + import:
Steps to reproduce
Homeserver
matrix.midnightthoughts.space
Synapse Version
v1.106.0
Installation Method
Docker (matrixdotorg/synapse)
Database
Postgres 15
Workers
Single process
Platform
Kubernetes with a pg cluster
Configuration
No response
Relevant log output