Closed tfreedman closed 2 months ago
Thanks for the report. It looks like we should never have put an index on the tag values on the PG side, since Postgres enforces a maximum index entry size that long tag values can easily exceed.
I do not run the postgres backend, but I'll try to get a schema change to drop this index.
In the meantime - @tfreedman if you want, you could try dropping the "tag_value_idx" index and see if that resolves the problem and allows the relay to start up.
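For reference, dropping the index is a one-line statement (a sketch; `tag_value_idx` is the name mentioned above, so it's worth confirming it against your actual schema with `\di` first):

```sql
-- Remove the oversized index on tag values; run against the relay's database.
DROP INDEX IF EXISTS tag_value_idx;
```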
We do need some kind of index here; at first I (wrongly) thought it was covered by a different one. We may want to limit tag value size, or hash large values for the index.
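One way to sketch the hashing idea, using only Rust's standard library (the threshold, function name, and key format here are hypothetical, not taken from the relay's code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical threshold: values longer than this get indexed by a hash
/// instead of by their raw bytes, keeping index entries small.
const MAX_INDEXED_TAG_LEN: usize = 255;

/// Return the value to store in the indexed column: the raw value if it
/// is short, otherwise a fixed-width hex digest of it.
fn index_key(tag_value: &str) -> String {
    if tag_value.len() <= MAX_INDEXED_TAG_LEN {
        tag_value.to_string()
    } else {
        let mut h = DefaultHasher::new();
        tag_value.hash(&mut h);
        format!("hash:{:016x}", h.finish())
    }
}

fn main() {
    let short = "nostr";
    let long = "x".repeat(10_000);
    assert_eq!(index_key(short), "nostr");
    // Long values collapse to a short, fixed-size key.
    assert!(index_key(&long).len() < 32);
    println!("short -> {}", index_key(short));
    println!("long  -> {}", index_key(&long));
}
```

Note that `DefaultHasher` is not guaranteed stable across Rust versions, so a real implementation would want a stable digest (e.g. SHA-256) since the hashed keys are persisted in the database.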
A better short-term fix will be to simply avoid unwrapping the transaction result and just log an error for these events with super long tag values.
@scsibug I think I had this index dropped too.
I'm just starting to play with the postgres backend more. I was able to duplicate this issue, and then resolve it by preventing the unwrap. Now it returns a generic error to the client, which is much better than panicking the writing thread.
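The change described above amounts to replacing an `unwrap()` on the write result with a match that logs the failure and returns a generic error. A minimal standalone sketch (the function names and the simulated size limit are illustrative, not the relay's actual API):

```rust
// Illustrative stand-in for the database write, which fails when a tag
// value would exceed the index entry size limit.
fn try_insert_tag(value: &str) -> Result<(), String> {
    if value.len() > 2000 {
        Err(format!("index row too large ({} bytes)", value.len()))
    } else {
        Ok(())
    }
}

/// Instead of `try_insert_tag(v).unwrap()`, which panics the writer
/// thread, log the failure and hand a generic error back to the client.
fn write_event_tag(value: &str) -> Result<(), &'static str> {
    match try_insert_tag(value) {
        Ok(()) => Ok(()),
        Err(e) => {
            eprintln!("tag insert failed: {}", e);
            Err("error: could not store event")
        }
    }
}

fn main() {
    assert!(write_event_tag("short").is_ok());
    assert!(write_event_tag(&"x".repeat(5_000)).is_err());
    println!("writer thread survives oversized tags");
}
```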
Is this index critical for performance? I would have thought so. But I think there is some opportunity for optimizing the indexes on the PG side (especially the unique_constraint_name index, which is quite large).
Closing since the panic should be fixed.
Just testing nostr-rs-relay, and threw some traffic at it. I managed to get it into a state where the app is now unusable, as it will never resume listening for new traffic, even when restarted. Here's the error message: