nickchomey opened 5 days ago
I just tried on a different table that has 4 records, and it failed to insert 454 records... Very bizarre.
If I change `sdk.batch.size` to 500, there's no error, but nothing gets written at all.
I've tinkered with various combinations of `batch.delay` and `batch.size`, and it does affect the number of failed/duplicate records, but there's no discernible pattern.
I figure it has something to do with batching, goroutines, etc.
OK, I know what is going on now: this table has 4 records with ids 455–458. Obviously the start row is not being set appropriately. `fetchStartEnd` just sets `w.start` to 0 if there is no stored value. There should probably be a mechanism to read the minimum id in the table?
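The arithmetic lines up with that theory. Assuming the snapshot worker treats the range as `(start, end]` (an assumption about the chunking logic; the real `fetchStartEnd` may differ), a minimal Go sketch shows why a zero start inflates the count for a table whose ids begin at 455:

```go
package main

import "fmt"

// candidateIDs returns how many ids the snapshot worker would iterate
// over for a half-open range (start, end]. Hypothetical helper for
// illustration, not code from the connector.
func candidateIDs(start, end uint64) uint64 {
	return end - start
}

func main() {
	// Table from this report: 4 rows with ids 455–458.
	minID, maxID := uint64(455), uint64(458)

	// Current behaviour: w.start falls back to 0, so the worker
	// considers 458 candidate ids, 454 of which do not exist.
	fmt.Println(candidateIDs(0, maxID)) // 458

	// Proposed behaviour: seed the start from SELECT MIN(id),
	// so only the 4 real rows are in range.
	fmt.Println(candidateIDs(minID-1, maxID)) // 4
}
```

Under that assumption, the 454 phantom candidate ids match the 454 failed inserts exactly.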
Bug description
I have made a basic destination connector for SurrealDB, which currently only does "INSERT" statements for the OpenCDC records that it receives. This only creates a record when it doesn't already exist.
Combined with my PR (#51) for using a wildcard for table names, I was able to import all of the tables (and, presumably, rows) from a fresh WordPress installation.
However, the SurrealDB destination connector logs when an insert fails, and it logged 59 errors of this type.
I have manually checked, and there are only 192 rows in the DB, but the destination connector is receiving 251 records, so the 59 errors add up (251 − 192 = 59).
An excerpt:
Steps to reproduce
In lieu of setting up WordPress, I figure you could just try this on any MySQL database: log the number of records emitted by the MySQL source connector and compare it to how many records there actually are in the MySQL DB.
Version
latest commit on main (ec4da2c) + my PR commit in #51