Open galvanidev opened 4 years ago
I realized that after removing the transformations I had applied, the DATA/BEFORE fields were displayed.
my transformations:
"transforms": "InsertField", "transforms.InsertField.type": "org.apache.kafka.connect.transforms.InsertField$Value", "transforms.InsertField.static.field": "clientIdFromHeader", "transforms.InsertField.static.value": "123456"
But there are still duplicate UPDATE operations:
First:
{ "SCN": 16650520, "SEG_OWNER": "BANDEIRA", "TABLE_NAME": "MT_TESTE", "TIMESTAMP": 1578944080000, "SQL_REDO": "update \"BANDEIRA\".\"MT_TESTE\" set \"P_ROWID\" = NULL, \"NOME\" = NULL where \"CTPF_ROWID\" = AAASMmAANAAABbbAAA and \"PF_ROWID\" = AAASMHAANAAABXjAAA and \"P_ROWID\" = AAASMGAANAAABXbAAA and \"E_ROWID\" = AAASNaAANAAABh7AAA and \"IDCLIENTETITULARPESSOAFISICA\" = 15 and \"NOME\" = MATHEUS NELSON NOGUEIRA 2 and \"LOGRADOURO\" = R DEPUTADO NGELO JORDO", "OPERATION": "UPDATE", "data": { "CTPF_ROWID": "AAASMmAANAAABbbAAA", "PF_ROWID": "AAASMHAANAAABXjAAA", "P_ROWID": null, "E_ROWID": "AAASNaAANAAABh7AAA", "IDCLIENTETITULARPESSOAFISICA": 15, "NOME": null, "LOGRADOURO": "R DEPUTADO NGELO JORDO" }, "before": { "CTPF_ROWID": "AAASMmAANAAABbbAAA", "PF_ROWID": "AAASMHAANAAABXjAAA", "P_ROWID": "AAASMGAANAAABXbAAA", "E_ROWID": "AAASNaAANAAABh7AAA", "IDCLIENTETITULARPESSOAFISICA": 15, "NOME": "MATHEUS NELSON NOGUEIRA 2", "LOGRADOURO": "R DEPUTADO NGELO JORDO" } }
Second:
{ "SCN": 16650520, "SEG_OWNER": "BANDEIRA", "TABLE_NAME": "MT_TESTE", "TIMESTAMP": 1578944080000, "SQL_REDO": "update \"BANDEIRA\".\"MT_TESTE\" set \"P_ROWID\" = AAASMGAANAAABXbAAA, \"NOME\" = MATHEUS NELSON NOGUEIRA where \"CTPF_ROWID\" = AAASMmAANAAABbbAAA and \"PF_ROWID\" = AAASMHAANAAABXjAAA and \"P_ROWID\" IS NULL and \"E_ROWID\" = AAASNaAANAAABh7AAA and \"IDCLIENTETITULARPESSOAFISICA\" = 15 and \"NOME\" IS NULL and \"LOGRADOURO\" = R DEPUTADO NGELO JORDO", "OPERATION": "UPDATE", "data": { "CTPF_ROWID": "AAASMmAANAAABbbAAA", "PF_ROWID": "AAASMHAANAAABXjAAA", "P_ROWID": "AAASMGAANAAABXbAAA", "E_ROWID": "AAASNaAANAAABh7AAA", "IDCLIENTETITULARPESSOAFISICA": 15, "NOME": "MATHEUS NELSON NOGUEIRA", "LOGRADOURO": "R DEPUTADO NGELO JORDO" }, "before": { "CTPF_ROWID": "AAASMmAANAAABbbAAA", "PF_ROWID": "AAASMHAANAAABXjAAA", "P_ROWID": null, "E_ROWID": "AAASNaAANAAABh7AAA", "IDCLIENTETITULARPESSOAFISICA": 15, "NOME": null, "LOGRADOURO": "R DEPUTADO NGELO JORDO" } }
Apparently it is splitting DATA/BEFORE across two Kafka messages. Is there any configuration to solve this?
Hi, actually this connector only captures LogMiner entries, which are DMLs on the database. It may be Oracle behaviour. If you give some detail regarding your DMLs and the junction materialized view's table structure, I can check it out. Thanks.
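To make that request concrete, a purely hypothetical sketch of a ROWID-based junction materialized view and the single DML that might sit behind the two events above. The base-table names, join conditions, refresh options, and key value are invented for illustration; only the MV name and column names come from the events:

-- Hypothetical junction materialized view; base-table names and join
-- conditions are invented for illustration only.
-- (Fast refresh would also require materialized view logs WITH ROWID
-- on the base tables.)
CREATE MATERIALIZED VIEW BANDEIRA.MT_TESTE
  REFRESH FAST ON COMMIT
AS
SELECT ctpf.ROWID AS CTPF_ROWID,
       pf.ROWID   AS PF_ROWID,
       p.ROWID    AS P_ROWID,
       e.ROWID    AS E_ROWID,
       ctpf.IDCLIENTETITULARPESSOAFISICA,
       p.NOME,
       e.LOGRADOURO
  FROM BANDEIRA.CLIENTE_TITULAR_PF ctpf
  JOIN BANDEIRA.PESSOA_FISICA pf ON pf.ID = ctpf.ID_PESSOA_FISICA
  JOIN BANDEIRA.PESSOA p         ON p.ID = pf.ID_PESSOA
  JOIN BANDEIRA.ENDERECO e       ON e.ID_PESSOA = p.ID;

-- Hypothetical single-row DML that would have produced the two UPDATE
-- events captured against the MV container table; the key value is a
-- placeholder.
UPDATE BANDEIRA.PESSOA
   SET NOME = 'MATHEUS NELSON NOGUEIRA'
 WHERE ID = 15;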
Hi, thanks so much for sharing this connector.
I am trying to use a materialized view with junctions. It works fine for inserts, but not for updates.
The UPDATE operation is sent twice after updating a row.
I always receive a sequence of UPDATE/UPDATE operations after an update, with null values in DATA/BEFORE: