Naiglos opened this issue 1 year ago
Hi, thanks for the report!
Recent db-sync releases (13.1+) dropped some indices that our queries rely on. You can (re)create them manually based on https://github.com/input-output-hk/cardano-db-sync/issues/1349#issuecomment-1416017488, which should fix the sequential scan affecting this query.
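For readers who don't want to follow the link, the recreated indices look roughly like this. Note that the index and table names below are illustrative sketches; the linked db-sync comment is the authoritative list:

```sql
-- Hypothetical examples only; take the exact statements from the
-- cardano-db-sync issue comment linked above.
CREATE INDEX IF NOT EXISTS idx_ma_tx_out_ident ON ma_tx_out (ident);
CREATE INDEX IF NOT EXISTS idx_ma_tx_mint_ident ON ma_tx_mint (ident);
```

`CREATE INDEX IF NOT EXISTS` makes the statements safe to re-run against a database that already has them.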
Hello, thank you for your suggestion. I created the indices mentioned in the linked comment. Somehow, it got worse: response time now hovers between 47 and 189 seconds.
Once again I executed the query directly with EXPLAIN:
In the meantime, here are some steps I took, but without success:
VACUUM ANALYZE
and VACUUM
VACUUM FULL
and REINDEX
on the tables involved in the query.

This is almost certainly due to low resources for the database. 64 GB is the bare minimum; try to increase it and play with the PostgreSQL settings. That query runs over a pretty large dataset and is probably doing the best it can.
That being said, if you find a way to optimize it, a PR is most certainly welcome!
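For illustration, "playing with the PostgreSQL settings" usually means raising the memory-related parameters in postgresql.conf. The values below are rough starting points I'd assume for a 64 GB host, not figures confirmed anywhere in this thread:

```
# postgresql.conf (illustrative values for a 64 GB machine)
shared_buffers = 16GB        # ~25% of RAM is a common starting point
effective_cache_size = 48GB  # how much the planner assumes the OS caches
work_mem = 256MB             # per sort/hash node; can avoid on-disk sorts
maintenance_work_mem = 2GB   # speeds up VACUUM and CREATE INDEX
```

Changing `shared_buffers` requires a server restart; the others can be reloaded.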
Hello, I actually did some additional testing. @mmahut you were absolutely right about the resource constraint. It turns out the asset used for testing indeed contains an abnormally large number of addresses compared to other assets. With other assets, performance improved dramatically: response time dropped from ~7 seconds to ~130 ms.
Can I suggest adding those indices to the README instructions? It could be a life-saver for some. As for resource estimation, it would be useful to have a recommended hardware spec listed per component.
Thank you for your time.
Would it be possible to automate creation of these indexes on the Blockfrost side?
Yes, that's a good idea. We should add it to blockfrost-bootstrap.
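A minimal sketch of what that automation could look like. The script name, index list, and connection variable here are all assumptions for illustration, not the actual blockfrost-bootstrap implementation:

```shell
#!/bin/sh
# Sketch: emit the CREATE INDEX statements that db-sync 13.1+ no longer
# creates, so bootstrap can apply them idempotently, e.g.:
#   ./create-indexes.sh | psql "$DBSYNC_DB_URL"
# Index names are illustrative; take the real list from the linked
# cardano-db-sync issue comment.
for idx in \
  "idx_ma_tx_out_ident ON ma_tx_out (ident)" \
  "idx_ma_tx_mint_ident ON ma_tx_mint (ident)"
do
  printf 'CREATE INDEX IF NOT EXISTS %s;\n' "$idx"
done
```

Because the generated statements use `IF NOT EXISTS`, re-running the bootstrap against an already-indexed database is harmless.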
Problem Description
Querying the /assets/{asset}/addresses endpoint gives response times ranging from 7 to 35 seconds.
Environment
Hardware
Software
Applications version
Applications specific settings
Troubleshooting steps taken
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)
Can be viewed here
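For anyone reproducing the measurement, the plan was captured with a statement of roughly this shape (the query body is abbreviated here; the real one is whatever Blockfrost's backend runs for this endpoint):

```sql
-- Hypothetical shape of the capture; substitute the endpoint's actual query.
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON)
SELECT ...;
```

`ANALYZE` actually executes the query, and `BUFFERS` adds shared-buffer hit/read counts, which is what makes sequential scans on large tables stand out in the plan.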