sethideepak opened this issue 1 year ago
Having the same issue migrating from 14.10 to 15.5 in AWS RDS. I had to delete the target instance completely. Testing again with 15.4.
@camilb What was the outcome of your tests? We are having the same issues - crashes with AWS Aurora on 14.8.
@ahollmann That was my mistake. One of the tables had a calculated column, which triggered a replication timeout in AWS RDS. If the column was omitted, the target DB crashed and entered a recovery loop. The solution was to remove the table from replication and dump/import it manually.
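For anyone hitting the same thing, the workaround above could look roughly like this; a minimal sketch, assuming a hypothetical table public.problem_table and the default replication set (the manual data move itself would then be done outside pglogical, e.g. with pg_dump piped into psql against the target):

-- On the provider: take the problematic table out of logical replication
SELECT pglogical.replication_set_remove_table('default', 'public.problem_table');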
Thanks for your reply, it helped us fix our segfaults.
I'm currently experiencing a segmentation fault while attempting to replicate data using pglogical. The issue arises when converting two non-partitioned tables to a partitioned table during the replication process.
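For context, the target side looks roughly like the following; this is only an illustrative sketch, with hypothetical table and column names, showing two plain source tables consolidated into one partitioned table on the subscriber:

-- Hypothetical target schema: one partitioned table replacing two plain source tables
CREATE TABLE public.events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE public.events_2023 PARTITION OF public.events
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE public.events_2024 PARTITION OF public.events
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');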
To work around this, I adjusted the call on the provider node, setting the parameter synchronize_data to false so that the initial data copy is skipped for the added tables:
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public'], synchronize_data := false);
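With the initial copy skipped, a table's data can still be pulled later from the subscriber side; a minimal sketch, assuming a hypothetical subscription my_subscription and table public.events:

-- On the subscriber: re-copy the data for one table of an existing subscription
SELECT pglogical.alter_subscription_resynchronize_table('my_subscription', 'public.events');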
Here are the details of the source and target environments:
Source:
PostgreSQL: 9.5.25
pglogical: 2.4.3

Target:
PostgreSQL: 15.5
pglogical: 2.4.3
Notably, I successfully tested the replication process on PostgreSQL 15.4 with pglogical 2.4.3 in a development environment, where it worked seamlessly.
However, upon attempting the same process on PostgreSQL 15.5 in a production environment, I encountered a replication failure for one of the tables. The specific error message is as follows:
Background worker "pglogical apply 428648:1802888491" (PID 23195) was terminated by signal 11: Segmentation fault.
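To narrow down which table the apply worker crashes on, the subscriber-side status functions are useful; a minimal sketch, assuming a hypothetical subscription my_subscription and table public.events:

-- On the subscriber: overall subscription state
SELECT * FROM pglogical.show_subscription_status();

-- Sync state of one specific table within the subscription
SELECT * FROM pglogical.show_subscription_table('my_subscription', 'public.events');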
I am currently investigating potential causes, considering factors such as compatibility issues, changes in schema handling, and any specific considerations related to PostgreSQL 15.5. I will proceed with debugging the segmentation fault and seek guidance from the relevant communities to resolve this issue effectively.