Hi @aborschev; thanks for reporting this issue. The problem we face is that in some cases the test_decoding logical decoding plugin skips the old-key part of the UPDATE message entirely. The client (pgcopydb, here) is then expected to maintain an in-memory cache of the primary keys of all the tables that receive UPDATE statements, in order to make sense of those UPDATE messages.
At the moment such a feature is not implemented in pgcopydb, which only knows how to parse complete UPDATE messages in the logical decoding output from test_decoding.
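As a hand-written illustration (not output captured from this issue), test_decoding only emits the old-key part when the replica identity columns actually change (or when the table uses REPLICA IDENTITY FULL); for an ordinary UPDATE the message carries the new tuple only, so the client has to already know which columns form the key:

```
-- UPDATE foo SET data = 'bar' WHERE id = 1;   key unchanged: no old-key part
table public.foo: UPDATE: id[bigint]:1 data[text]:'bar'

-- UPDATE foo SET id = 2 WHERE id = 1;         key changed: old-key part is present
table public.foo: UPDATE: old-key: id[bigint]:1 new-tuple: id[bigint]:2 data[text]:'bar'
```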
Experiencing a similar issue. I tried the wal2json plugin instead, and everything seemed to work flawlessly, until I looked into cdc/xxxx.sql and found the following UPDATE statement there:
UPDATE ... SET ... WHERE "id" = 3.55061e+07;
It turns out the bigint 35506143 was converted into a floating-point value!
However, checking the corresponding JSON file:
{
"action": "U",
"xid": "635103562",
"lsn": "B69/82588E0",
"timestamp": "2023-04-20 11:17:02.410232+0000",
"message": {
"action": "U",
"xid": 635103562,
"schema": "public",
"table": "xxx",
"columns": [
{ "name": "id", "value": 35506143 },
// ...
],
"identity": [{ "name": "id", "value": 35506143 }]
}
}
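For what it's worth, the JSON value is exact, and 3.55061e+07 is exactly how a double prints with six significant digits (printf's %g default), which suggests the value gets formatted as a float somewhere on the way from the JSON file to the SQL file. The rounding is not merely cosmetic; if that SQL were replayed, the literal would no longer identify the original row (a quick illustration):

```sql
SELECT 3.55061e+07::bigint;      -- 35506100, not 35506143
SELECT 35506143 = 3.55061e+07;   -- false: the replayed UPDATE misses the intended row
```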
Should I open an issue?
This seems to be related: https://github.com/dimitri/pgcopydb/issues/127
Hi! I've got a test stand with two Debian Bullseye VMs (4 CPU, 4 GB RAM each): one hosts the source DB, the other the target. pgcopydb 0.11 is on the target host.
Source: vanilla PG15 with 10 GB of data plus pgbench tables. I emulated a workload with pgbench.
I tried pgcopydb with both --plugin=test_decoding and --plugin=wal2json.
If you propose some tests or ask for more diagnostics, I will try to run them with this setup.
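The exact pgbench and pgcopydb invocations aren't quoted above; as a purely hypothetical sketch of that kind of test loop (the scale, client counts, and durations are made up), it would be something along these lines:

```sh
# assumes PGCOPYDB_SOURCE_PGURI and PGCOPYDB_TARGET_PGURI are exported
pgbench -i -s 100 "$PGCOPYDB_SOURCE_PGURI"            # seed pgbench tables on the source
pgbench -c 4 -j 2 -T 600 "$PGCOPYDB_SOURCE_PGURI" &   # keep UPDATE traffic flowing
pgcopydb clone --follow --plugin=test_decoding        # base copy plus change data capture
```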