Open · michalrus opened this issue 3 weeks ago
Hi @michalrus, thanks for creating the issue.
I guess this is something that can be done file by file, since we compute the digests of the Cardano database immutable files one by one with the CardanoImmutableDigester.
:+1:
Why
Considering that the digest currently takes over 10 minutes to compute on Mainnet, it would be a better UX if we could show progress to the user.
What
I see in this code fragment that no progress is being updated.
This can be tricky because I'm assuming that the digest computation is recursive. But we could first compute the total size of the unpacked directory, maybe even in parallel, delaying the first progress update so as not to lose unpacking time. Alternatively, even progress in terms of the number of files processed would work well; the speed wouldn't be constant, but it would still be informative.
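The idea above could be sketched roughly as follows. This is a minimal illustration, not the actual CardanoImmutableDigester code: the function names (`total_bytes`, `digest_with_progress`) and the callback-based reporting are assumptions, and the real digester's directory layout, hashing, and progress API may differ.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical helper: recursively sum file sizes under `dir`,
/// giving the denominator for byte-based progress.
fn total_bytes(dir: &Path) -> io::Result<u64> {
    let mut total = 0;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        if meta.is_dir() {
            total += total_bytes(&entry.path())?;
        } else {
            total += meta.len();
        }
    }
    Ok(total)
}

/// Hypothetical sketch: walk the directory, digest files one by one,
/// and report (processed_bytes, total_bytes) after each file through
/// a caller-supplied callback.
fn digest_with_progress<F: FnMut(u64, u64)>(dir: &Path, mut report: F) -> io::Result<()> {
    let total = total_bytes(dir)?;
    let mut done = 0u64;
    let mut stack = vec![dir.to_path_buf()];
    while let Some(d) = stack.pop() {
        for entry in fs::read_dir(&d)? {
            let entry = entry?;
            let meta = entry.metadata()?;
            if meta.is_dir() {
                stack.push(entry.path());
            } else {
                // Hash the file here (omitted), then bump the counter.
                done += meta.len();
                report(done, total);
            }
        }
    }
    Ok(())
}
```

With file-count progress instead, the pre-pass just counts entries, which is cheaper than stat-ing every file for its size but gives a less even progress curve, as the issue notes.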
(original Slack thread)