subkanthi opened 5 months ago
Load the binlog to a local server: errors in GTID_MODE
SET @@GLOBAL.GTID_MODE=ON_PERMISSIVE
I have no name!@00503342fd32:/$ mysqlbinlog --no-defaults /tmp/binary.074080 |mysql -u root -p
Enter password:
ERROR 1146 (42S02) at line 46: Table
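For reference, two separate things seem to be going on here. The GTID error happens because the mysqlbinlog dump contains SET @@SESSION.GTID_NEXT statements that can conflict with the local server's GTID state, and GTID_MODE can only be changed one step at a time. ERROR 1146 is different: the table referenced at line 46 of the dump does not exist on the local server, so the schema has to exist before the row events can be replayed. A minimal sketch, reusing the file path from the command above (credentials are placeholders):

# GTID_MODE transitions must be stepwise:
# OFF -> OFF_PERMISSIVE -> ON_PERMISSIVE -> ON
mysql -u root -p -e "SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;"
mysql -u root -p -e "SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;"

# Alternatively, strip GTID events from the dump entirely so the local
# server's GTID state is irrelevant:
mysqlbinlog --no-defaults --skip-gtids /tmp/binary.074080 | mysql -u root -p

# For ERROR 1146, restore the schema first (e.g. from a mysqldump of the
# source server), then replay the binlog.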
I have the same problem, but likely because the database is updated with large transactions. Here is my log; the tool complains that it failed to deserialize EventHeaderV4{timestamp=1724185177000, eventType=TABLE_MAP
2024-08-20T20:20:40.832715274Z 2024-08-20 20:20:40.832 ERROR - Error during binlog processing. Last offset stored = {ts_sec=1724185169, file=mysql_bin.002677,7a814ecbf:1-198200204, server_id=46, event=37}, binlog reader near position = mysql_bin.002677/81376772
2024-08-20T20:20:40.835679607Z 2024-08-20 20:20:40.832 ERROR - Producer failure
2024-08-20T20:20:40.835743925Z io.debezium.DebeziumException: Failed to deserialize data of EventHeaderV4{timestamp=1724185177000, eventType=TABLE_MAP, servertion=81376978, flags=0}
2024-08-20T20:20:40.835764323Z at io.debezium.connector.binlog.BinlogStreamingChangeEventSource.wrap(BinlogStreamingChangeEventSource.java:1190) ~[app.jar:?]
2024-08-20T20:20:40.835782145Z at io.debezium.connector.binlog.BinlogStreamingChangeEventSource$ReaderThreadLifecycleListener.onEventDeserializationFailure(Bjar:?]
Here is the binlog output that causes the error. I dug through more logs around the period when the problem happened, and I indeed found a very long record update operation, exactly as the warning indicates:
WARNING: The range of printed events ends with a row event or a table map event that does not have the STMT_END_F flag set. This might be because the last statement was not fully written to the log, or because you are using a --stop-position or --stop-datetime that refers to an event in the middle of a statement. The event(s) from the partial statement have not been written to output.
ROLLBACK /* added by mysqlbinlog */ /*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
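For anyone trying to narrow this down: the Debezium log above gives both the binlog file (mysql_bin.002677) and the reader position near the failure (81376772; the truncated event header shows 81376978, which looks like that event's end position). The events just before the failure can be dumped and decoded. A sketch, assuming the binlog file is accessible on the database host; note that --start-position must land exactly on an event boundary, so decoding from the start of the file up to the failing position is the safer option:

mysqlbinlog --no-defaults --base64-output=decode-rows --verbose \
    --stop-position=81376978 mysql_bin.002677 | tail -n 200

--base64-output=decode-rows together with --verbose prints row events as commented pseudo-SQL, which makes an oversized multi-row UPDATE easy to spot.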
This problem has troubled me for a long time; I would really appreciate it if someone could help me out. I can provide more logs if needed. I believe a large data operation that exceeds one CDC cycle is the cause of the failure.
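If losing the single undecodable event is acceptable as a stopgap, the Debezium MySQL connector exposes event.deserialization.failure.handling.mode, which can be set to warn so the connector logs and skips an event that fails to deserialize instead of stopping the task. A minimal sketch, assuming a Kafka Connect deployment on localhost:8083 and a connector named mysql-connector; all connection values here are placeholders:

curl -s -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/mysql-connector/config \
  -d '{
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "secret",
    "database.server.id": "184054",
    "topic.prefix": "dbserver1",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-history.dbserver1",
    "event.deserialization.failure.handling.mode": "warn"
  }'

This only skips the problematic event; it does not address the underlying deserialization failure.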
https://issues.redhat.com/browse/DBZ-7919 https://issues.redhat.com/browse/DBZ-7928