Closed telezhnaya closed 1 year ago
We may initialize the absolute value lazily.
Draft of a SQL query to update values later, for future me:

```sql
with t as (
    select event_index, affected_account_id, contract_account_id, delta_amount, absolute_amount
    from coin_events
    where affected_account_id = 'mitchellcm.near'
    order by event_index
)
select event_index,
       delta_amount,
       sum(delta_amount) over (partition by affected_account_id, contract_account_id order by event_index) - 1000 as cum_amt,
       absolute_amount
from t;
```
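As a sanity check, the window-function logic of the draft can be reproduced on an in-memory SQLite database with made-up data (account names and amounts below are illustrative; the `- 1000` stands in for the starting balance of the test account):

```python
import sqlite3

# Minimal sketch: replay the cumulative-sum logic of the draft query
# on SQLite with illustrative data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE coin_events (
        event_index INTEGER,
        affected_account_id TEXT,
        contract_account_id TEXT,
        delta_amount INTEGER,
        absolute_amount INTEGER
    )
""")
conn.executemany(
    "INSERT INTO coin_events VALUES (?, ?, ?, ?, NULL)",
    [
        (1, "alice.near", "token.near", 1500),
        (2, "alice.near", "token.near", -200),
        (3, "alice.near", "token.near", 700),
    ],
)

# Running balance = cumulative sum of deltas per (account, contract),
# shifted by the assumed starting offset (1000 here, as in the draft).
rows = conn.execute("""
    SELECT event_index,
           delta_amount,
           SUM(delta_amount) OVER (
               PARTITION BY affected_account_id, contract_account_id
               ORDER BY event_index
           ) - 1000 AS cum_amt
    FROM coin_events
    ORDER BY event_index
""").fetchall()
for row in rows:
    print(row)
```

This prints one row per event with the running `cum_amt`, which is exactly the value we would want to land in `absolute_amount`.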
There's no need to review the PR above, but we need to think carefully about how we should fill `absolute_amount`.
Block processing speed should be at least 2 blocks per second under the heaviest possible load.
The main bottlenecks:
I see several options for filling `absolute_amount` with the resulting value.

Option 1: leave `absolute_amount` empty on insert; the other process does `UPDATE` on each line to fill in the value.

Pros:

- we only need RPC at some points and can compute `absolute_amount` values between these points -> fewer RPC calls
- `UPDATE`s can be batched, even across different users and contracts if we want

Cons:

- the `UPDATE` query is heavy, and it will slow down all the processes in the DB
- `VACUUM` will be required much more often, because updating a line means deleting + inserting. That will give us tons of dead rows that need to be vacuumed.

To sum up, I don't believe in this solution.
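For concreteness, the `UPDATE`-based flow could look roughly like this (a sketch on an in-memory SQLite DB; `get_balance_from_rpc`, the account names, and the amounts are all hypothetical placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE coin_events (
        event_index INTEGER PRIMARY KEY,
        affected_account_id TEXT,
        contract_account_id TEXT,
        delta_amount INTEGER,
        absolute_amount INTEGER  -- NULL until the second process fills it
    )
""")

# First process: the indexer inserts events without the absolute value.
conn.executemany(
    "INSERT INTO coin_events VALUES (?, ?, ?, ?, NULL)",
    [(1, "alice.near", "token.near", 500),
     (2, "alice.near", "token.near", -100)],
)

# Second process: fetch one anchor balance from RPC, then fill in the
# running sum for every row after that point.
def get_balance_from_rpc(account_id, contract_id):
    # Hypothetical stand-in for a real RPC call.
    return 1000  # pretend balance before event_index 1

balance = get_balance_from_rpc("alice.near", "token.near")
pending = conn.execute(
    """SELECT event_index, delta_amount FROM coin_events
       WHERE affected_account_id = ? AND contract_account_id = ?
         AND absolute_amount IS NULL
       ORDER BY event_index""",
    ("alice.near", "token.near"),
).fetchall()

updates = []
for event_index, delta in pending:
    balance += delta
    updates.append((balance, event_index))

# One batched round-trip — but it still rewrites every row, which is the
# source of the vacuum/bloat concern above.
conn.executemany(
    "UPDATE coin_events SET absolute_amount = ? WHERE event_index = ?",
    updates,
)
print(conn.execute(
    "SELECT event_index, absolute_amount FROM coin_events ORDER BY event_index"
).fetchall())
```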
Option 2: write the lines without `absolute_amount` to a temp table; the other process does `SELECT` + `INSERT` on each line to fill in the (other) resulting table. A similar approach, but:

Pros:

Cons:
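A minimal sketch of this staging-table variant, again on SQLite with made-up data (the starting balance of 1000 would come from RPC in reality, and all names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Staging table written by the indexer: no absolute_amount yet.
conn.execute("""
    CREATE TABLE coin_events_staging (
        event_index INTEGER,
        affected_account_id TEXT,
        contract_account_id TEXT,
        delta_amount INTEGER
    )
""")
# Final table is append-only: no UPDATE, so no extra vacuum pressure.
conn.execute("""
    CREATE TABLE coin_events (
        event_index INTEGER,
        affected_account_id TEXT,
        contract_account_id TEXT,
        delta_amount INTEGER,
        absolute_amount INTEGER
    )
""")

conn.executemany(
    "INSERT INTO coin_events_staging VALUES (?, ?, ?, ?)",
    [(1, "alice.near", "token.near", 500),
     (2, "alice.near", "token.near", -100)],
)

# Second process: SELECT from staging, compute the running balance with a
# window function, and INSERT into the resulting table in one statement.
conn.execute("""
    INSERT INTO coin_events
    SELECT event_index, affected_account_id, contract_account_id, delta_amount,
           1000 + SUM(delta_amount) OVER (
               PARTITION BY affected_account_id, contract_account_id
               ORDER BY event_index)
    FROM coin_events_staging
""")
conn.execute("DELETE FROM coin_events_staging")
print(conn.execute(
    "SELECT event_index, absolute_amount FROM coin_events ORDER BY event_index"
).fetchall())
```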
Option 3: make `absolute_amount` NOT NULL, collect the events, and save them somewhere (files? using event streaming?); the other process takes these files/events and performs the inserts into the DB.

Pros:

Cons:

I think this solution has potential, but we need to discuss it.
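One way to picture option 3: the indexer appends events to a file (a stand-in here for a real stream such as Kafka), and a separate consumer replays them, computing `absolute_amount` before any insert, so the column can be NOT NULL from the start. All names, the file format, and the seed balance of 1000 are illustrative assumptions:

```python
import json
import os
import tempfile

# Producer side: the indexer appends raw events to an append-only
# JSON-lines file instead of touching the DB.
events = [
    {"event_index": 1, "affected_account_id": "alice.near",
     "contract_account_id": "token.near", "delta_amount": 500},
    {"event_index": 2, "affected_account_id": "alice.near",
     "contract_account_id": "token.near", "delta_amount": -100},
]
path = os.path.join(tempfile.mkdtemp(), "coin_events.jsonl")
with open(path, "w") as f:
    for e in events:
        f.write(json.dumps(e) + "\n")

# Consumer side: keep a running balance per (account, contract) and build
# the rows to insert, each already carrying a non-NULL absolute_amount.
balances = {}  # (account, contract) -> running balance; seeded from RPC in reality
rows = []
with open(path) as f:
    for line in f:
        e = json.loads(line)
        key = (e["affected_account_id"], e["contract_account_id"])
        balances[key] = balances.get(key, 1000) + e["delta_amount"]  # 1000 = assumed start
        rows.append((e["event_index"], balances[key]))
print(rows)
```

The consumer can then batch plain `INSERT`s, which keeps the DB write path append-only.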
@frol suggested just dropping the `absolute_value` column.

Pros:

Cons:
We dropped the `absolute_value` column.
Starting from September 13th, we can process only 1 block per minute. We need to speed the solution up by at least 120x so that we can catch up with the latest block at some point.
Ideas: