**Open** · Frank-GER opened this issue 4 months ago
Hi. Tuning the workers (blockchain + mempool sync) will not help you increase user throughput; they are more or less independent of the number of served users, as they only sync Blockbook with the backend.
The main load of `GetXpubAddress` can be split into two parts:
In general, I do not think there is much more to tune, unfortunately. Increasing the cache size would help only if disk throughput were the limiting factor.
Understood. After a certain time, response times increase to the point where even a `GetSystemInfo` can take a few seconds.
There is another point I discovered while testing for a solution: even after I switch users to a different server, the CPU load on the first server stays at at least 50%, all of it from the blockbook service. Only a restart of the service brings it back to normal (a few percent).
@Frank-GER Interesting. Could you please try to identify the issue using profiling?
You can add profiling to Blockbook by adding the flag `-prof=127.0.0.1:8335` and restarting. Let it run until it is in the problematic state and then connect to it using the Go profiler: `go tool pprof -http=:8336 "http://localhost:8335/debug/pprof/profile?seconds=10"`. On port 8336 there will then be a page with profiling info, including a very good flame graph. The profiling in my example is set up as if you ran everything locally; you will have to sort out the networking based on your setup.
I usually resolve the networking with SSH tunnelling, e.g. `ssh -L 8335:localhost:8335 <server>`, which makes the remote port behave as if it were open locally on my computer.
Is there a way to tune Blockbook/RocksDB to better handle a larger number of users? I tried increasing the dbcache (cache size 2147483648, max open files 16384), but haven't touched workers yet (mempool: starting with 8*2 sync workers). Most requests (>95%) are `GetXpubAddress`. Do I need to increase the cache and/or workers to better handle that load? Disk throughput isn't critical, but any hints on holding more of the index in memory might improve the situation.
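For reference, the values above would map onto Blockbook's startup flags roughly as follows. This is only a sketch: the flag names (`-dbcache`, `-dbmaxopenfiles`) are assumed from the values quoted and should be verified against `blockbook -help` for your build.

```shell
# Sketch only: flag names assumed from the values quoted above;
# verify against `blockbook -help` for your version.
#   -dbcache        RocksDB cache size in bytes (2147483648 = 2 GiB)
#   -dbmaxopenfiles RocksDB max open files (16384)
blockbook -blockchaincfg=build/blockchaincfg.json \
  -dbcache=2147483648 -dbmaxopenfiles=16384 \
  -sync -public=:9130
```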