@fschoell The PK in the sink is the primary key, so it is definitely intended behaviour that it should fail if you try to insert multiple times with the same primary key.
I see that the code always uses `&blk_stats.chain.clone()` as the primary key. This value should instead be unique to the row itself (usually the value given as the key in the delta).
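For illustration, a rough sketch using the `Tables` helper from `substreams-database-change` (the table, column, and argument names here are hypothetical, not taken from the linked code):

```rust
use substreams_database_change::tables::Tables;

// Hypothetical illustration only: "block_stats" and "count" are made-up names,
// and `chain` / `delta_key` stand in for the values in the linked code.
fn illustrate_keying(tables: &mut Tables, chain: &str, delta_key: &str, count: i64) {
    // Keying every row on the chain means the sink sees the same PK over and
    // over, so the second INSERT with that PK fails.
    tables.create_row("block_stats", chain).set("count", count);

    // Keying the row on the delta's key gives each row its own PK.
    tables.create_row("block_stats", delta_key).set("count", count);
}
```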
Yes, that's on purpose. Those tables should only hold one row that contains the max transaction/action count found within all blocks so far, so the PK is always the same. On the first write to the `StoreMaxInt64`, the `db_out` function should generate a Create (`INSERT INTO`) database operation, and whenever a new max value is found, it should generate an Update (`UPDATE`) database operation.
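A minimal sketch of what such a `db_out` could look like, assuming the `substreams` and `substreams-database-change` crates (the table name `block_stats_max` and the column `max_count` are placeholders, not the actual schema):

```rust
use substreams::errors::Error;
use substreams::pb::substreams::store_delta::Operation;
use substreams::store::{DeltaInt64, Deltas};
use substreams_database_change::pb::database::DatabaseChanges;
use substreams_database_change::tables::Tables;

#[substreams::handlers::map]
fn db_out(deltas: Deltas<DeltaInt64>) -> Result<DatabaseChanges, Error> {
    let mut tables = Tables::new();

    for delta in deltas.deltas {
        match delta.operation {
            // First time this store key is written: emit a Create, which the
            // sink turns into an INSERT INTO.
            Operation::Create => {
                tables
                    .create_row("block_stats_max", delta.key.as_str())
                    .set("max_count", delta.new_value);
            }
            // A new max was found for an existing key: emit an Update, which
            // the sink turns into an UPDATE.
            Operation::Update => {
                tables
                    .update_row("block_stats_max", delta.key.as_str())
                    .set("max_count", delta.new_value);
            }
            // Deletes and unset operations are not expected from a max store.
            _ => {}
        }
    }

    Ok(tables.to_database_changes())
}
```

This way the sink only receives a Create for the first delta of a given key and Updates afterwards.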
Ah ok, the key is different, so there will be multiple creates I guess (one per key, not one per entry). Makes sense.
Testing a Substream that retrieves deltas from a `StoreMaxInt64` and translates them into database operations. See here for details. The table schema can be found here.
However, the sink fails for some reason, as it tries to insert the data with the same PK twice (instead of doing an `INSERT INTO` first and an `UPDATE` subsequently). Logs from the Postgres sink: