I see that the priority for graph_out is much lower than I would expect: store_pools has priority 304, but graph_out only 286. The second 100-block slice of store_pools has priority 303, then 302, and so on; those will ALL run before graph_out even starts!

It would make more sense if graph_out started at the beginning!
{"module": "store_pools", "start_block": 12369621, "end_block": 12369720, "priority": 304}
{"module": "store_pool_liquidities", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_native_total_value_locked", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_tokens_whitelist_pools", "start_block": 12369621, "end_block": 12369720, "priority": 304}
{"module": "store_prices", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_eth_prices", "start_block": 12369621, "end_block": 12369720, "priority": 299}
{"module": "store_all_positions", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_total_value_locked", "start_block": 12369621, "end_block": 12369720, "priority": 298}
{"module": "store_total_tx_counts", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_position_changes", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_pool_count", "start_block": 12369621, "end_block": 12369720, "priority": 304}
{"module": "store_totals", "start_block": 12369621, "end_block": 12369720, "priority": 297}
{"module": "store_swaps_volume", "start_block": 12369621, "end_block": 12369720, "priority": 297}
{"module": "store_total_value_locked_by_tokens", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_ticks_liquidities", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_ticks", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "store_pool_fee_growth_global_x128", "start_block": 12369621, "end_block": 12369720, "priority": 304}
{"module": "store_pool_sqrt_price", "start_block": 12369621, "end_block": 12369720, "priority": 303}
{"module": "graph_out", "start_block": 12369621, "end_block": 12369720, "priority": 286}
It seems forward parallel execution of Uniswap V3 does not stream data out rapidly, while the expectation is that the scheduler should favor chain segments over modules (e.g. all dependencies should run in the range 13M-13.1M rather than favoring the completion of a low-level module from 13M to head). This matters for `sink` code, which wants to receive data as fast as possible while parallel execution does its work.

This task is first about exploring the `uniswap-v3` scheduled work plan to determine whether there is a problem with it and whether the scheduler should be changed to favor vertical completion over horizontal completion. It was suggested that we quickly add a tool to the `substreams` CLI to print the work plan for a given "request".

If the work plan is not good, a new task will be to fix it to optimize vertical over horizontal completion. If the work plan is good, the new task will be to understand why it takes so long to receive data when forward parallel executing Uniswap V3.
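If exploration confirms the horizontal bias, one way to picture the "vertical completion" idea is sketched below. This is again hypothetical, not the actual scheduler, and reuses the assumed `workUnit` shape from the sketch above: order work units by segment start block first, and fall back to module priority only within a segment.

```go
package main

import (
	"fmt"
	"sort"
)

// workUnit is the same hypothetical shape as in the earlier sketch.
type workUnit struct {
	module     string
	startBlock uint64
	priority   int
}

func main() {
	units := []workUnit{
		{"store_pools", 12369721, 303}, // second segment, store module
		{"graph_out", 12369621, 286},   // first segment, output module
		{"store_pools", 12369621, 304}, // first segment, store module
	}

	// Vertical-first: finish every module of an earlier segment before
	// starting any module of a later one; priority only breaks ties
	// between modules of the same segment.
	sort.Slice(units, func(i, j int) bool {
		if units[i].startBlock != units[j].startBlock {
			return units[i].startBlock < units[j].startBlock
		}
		return units[i].priority > units[j].priority
	})

	for _, u := range units {
		fmt.Printf("run %s @ %d\n", u.module, u.startBlock)
	}
	// graph_out for segment 12369621 now runs before store_pools of 12369721.
}
```

With an ordering like this, a sink would start receiving `graph_out` data as soon as the first chain segment completes, instead of waiting for every store module to finish across the whole range.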