lidel opened this issue 4 years ago
@lidel Thank you for this quality ticket, I'm supportive. Will do my best to get this done in January.
Thanks @kelson42!! Curious how things are evolving now that we're in the new year - is this still on your agenda this month?
@momack2 I would like to, but we are a bit short on C++ resources currently. It has been postponed to February for the moment. If you can recommend someone, please tell us!
I don't know of any C++ devs with bandwidth, but @jnthnvctr might be able to suggest other routes to get this work increased attention. We'd really love to update our distributed wikipedia mirror with snapshots more recent than 2017... ;)
@momack2 It is just a "small" delay and working already to find someone. Maybe you can retweet https://twitter.com/KiwixOffline/status/1214826834417860609
I didn't know about the extract_zim tool. It's nice to see some Rust around ZIM (even if it is not maintained anymore).
I agree with this ticket; zim_tools is a small set of tools and we can improve it a lot.
Looping over articles in cluster-index order instead of URL order is already done in the zimrecreate tool. It should not be too difficult to reuse it. At the very least we would make use of the libzim cache system and avoid decompressing the same cluster several times.
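To illustrate why the iteration order matters, here is a toy model (not libzim code; all names are illustrative): with a cache that holds one decompressed cluster at a time, visiting entries in URL order can decompress the same cluster repeatedly, while visiting them grouped by cluster decompresses each cluster exactly once.

```python
def count_decompressions(entry_clusters):
    """Count cluster decompressions given the sequence of clusters the
    visited entries live in, assuming a cache of a single cluster."""
    cached = None
    decompressions = 0
    for cluster in entry_clusters:
        if cluster != cached:
            decompressions += 1  # cache miss: cluster must be decompressed
            cached = cluster
    return decompressions

# Four entries spread over two clusters.
url_order = [0, 1, 0, 1]      # entries visited in URL order
cluster_order = [0, 0, 1, 1]  # same entries, grouped by cluster

assert count_decompressions(url_order) == 4
assert count_decompressions(cluster_order) == 2
```

With larger caches the effect is less extreme, but cluster-order iteration still gives the best locality regardless of cache size.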
Hats off for that. extract_zim is a beautiful, lightning-fast tool. It took no longer than a minute to install and run, then extracted wikivoyage.zim (800 MB) in less than... guess? 10 seconds!
Structure? It avoids zimdump's URI-encoded flat file names; instead it 'reads' the URI part and creates a proper file structure, for example directories '-', /j, /s, plus favicon and style.css.
@dignifiedquire - just FYI your past work is getting some ❤️
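The directory structure described above boils down to splitting the entry URL on slashes instead of URI-encoding it into one flat filename. A minimal sketch of the idea (hypothetical helper, not the actual extract_zim code):

```python
import os

def entry_path(out_dir, namespace, url):
    """Map a ZIM entry (namespace + slash-separated URL) to a nested
    filesystem path, e.g. namespace 'A' and url 'wiki/Main_Page'
    become out_dir/A/wiki/Main_Page."""
    # Keep the namespace ("A", "-", "I", ...) as the top-level directory
    # and each URL segment as a subdirectory below it.
    parts = [p for p in url.split("/") if p]
    return os.path.join(out_dir, namespace, *parts)

# e.g. out/-/favicon and out/A/wiki/Main_Page on POSIX systems
print(entry_path("out", "-", "favicon"))
print(entry_path("out", "A", "wiki/Main_Page"))
```

A real extractor would also have to sanitize segments that are not valid file names, which this sketch skips.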
Just made some updates which fix some missed files, though I still have to investigate what exactly the difference in output between the two tools is. I also made it a bit faster (on my machine).
```
$ time ./target/release/extract_zim --skip-link ~/Downloads/wikipedia_en_top_mini_2019-09.zim --out ./out
Extracting file: /Users/dignifiedquire/Downloads/wikipedia_en_top_mini_2019-09.zim to ./out
Creating map
Extracting entries: 243
Spawning 243 tasks across 16 threads
Extraction done in 3268ms
Main page is index
./target/release/extract_zim --skip-link --out ./out  6.37s user 16.20s system 684% cpu 3.296 total

$ command du -sh out
737M    out
```
@mgautierfr Considering that this ticket will need to get the articles in the order they are stored in the file (to save cluster decompressions), and that this is also needed by (at least) zimrecreate and zimcheck, I think it would be smart to deliver this kind of iterator within libzim. Please confirm.
Most important part of the improvement will be achieved by implementing https://github.com/openzim/libzim/issues/300
@lidel The zimdump speed has been improved a lot (around 15x), but without adding multithreading. The reason is that we need to revamp our libzim cache strategy to really benefit from it in zimdump. For the moment this is on hold, and I will move it out of the IPFS project as I believe the speed is acceptable now.
@kelson42 I can confirm, it is now within the same order of magnitude as the rust library, and can be used for practical purposes.
wikipedia_tr_all_maxi_2020-04.zim (4.4G): still slower, but that will change with multithreading.
@lidel Thx for sharing the benchmark.
@mgautierfr @veloman-yunkan @MiguelRocha AFAIK the libzim cache has been improved in the latest version 6.2.0 to unlock the full potential of multithreading in zimcheck and zimdump. Therefore, wouldn't that be a good time to reconsider this ticket?
@kelson42 I can work on this
@veloman-yunkan Good for me, but @MiguelRocha had started something which I believe is available at https://github.com/openzim/zim-tools/tree/speed-up-zimdump. @MiguelRocha Do you remember what the status of that code was?
@kelson42 Yes, at that time I created a thread pool to handle the decompression of the clusters in a multi-threaded way. Since then zimdump has changed quite a bit, so it's just a matter of rebasing the branch onto master and fixing potential conflicts.
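The thread-pool idea works because clusters are independent: each can be decompressed without looking at the others. A toy sketch of the pattern (illustrative only, not the zimdump code; zlib stands in for the real ZIM cluster compression such as LZMA or zstd):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Fake "clusters": independently compressed blobs, as in a ZIM file.
clusters = [zlib.compress(("cluster %d " % i).encode() * 100) for i in range(8)]

def decompress(blob):
    # Each cluster is self-contained, so this runs safely in parallel.
    return zlib.decompress(blob)

# Hand every cluster to a pool of worker threads; map preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    decompressed = list(pool.map(decompress, clusters))

assert decompressed[3].startswith(b"cluster 3")
```

In CPython this helps because zlib releases the GIL during decompression; in C++ the equivalent is worker threads pulling cluster indices from a shared queue.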
@MiguelRocha Thank you for the update. @veloman-yunkan The same multithreading approach should be used for zimcheck as well.
@veloman-yunkan I'm unsure about the status here. Could you enlighten me, please?
@kelson42 I am going to do the zimcheck part over this weekend.
@veloman-yunkan Any news on this front?
@veloman-yunkan Any news on this front?
@kelson42 A prototype implementation was ready, but then the libzim_next branch was merged and I had to rebase my branch and resolve a lot of large conflicts. I was waiting for any ripple effects from the merged libzim_next branch to fade out before continuing/finishing the work on this.
@veloman-yunkan I don't think there is any big plan to make big changes again for now. @mgautierfr has to implement https://github.com/openzim/libzim/issues/397, but (besides a few other details) this is going to be the only thing to change before the release in mid-January. I think you can continue on this ticket.
So, we still need that for the zimdump binary (it's done for zimcheck), but it is less urgent. We can keep that for the future.
zimdump feels slower than it could be. Below are some notes from my tests and ideas on how to improve its performance.

Single thread? Lack of a buffer in front of disk writes?
I have an SSD, but my disk I/O remains pretty slow (iotop shows disk writes at <400 K/s!). The tool seems to be limited by the CPU: a single core is used and is constantly at 100%, while the remaining 7 cores stay unused. It looks like it is single-threaded and perhaps flushing after each write to disk?

Benchmarks
Unpacking wikipedia_en_top_mini_2019-09.zim (250M) took nearly 30 minutes.

This is super slow compared to the Rust-based multicore extract_zim from dignifiedquire/zim. It produces some errors and skips some files (the tool is not maintained anymore), but is able to extract most of it in under 10 seconds(!)

Things to try
Applying some/all of the optimizations from dignifiedquire/zim should make zimdump much, much faster:
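One of the hypotheses above, that zimdump may be flushing after each write, is easy to model: writing the same data with a per-write flush-and-fsync produces identical files but a very different I/O pattern from letting a large userspace buffer coalesce the writes. A minimal sketch (illustrative function names, not zimdump code):

```python
import os
import tempfile

def write_entries(path, entries, flush_each):
    """Write a list of byte chunks to path; optionally flush and fsync
    after every chunk, simulating the suspected per-write flushing."""
    with open(path, "wb", buffering=1024 * 1024) as f:
        for chunk in entries:
            f.write(chunk)
            if flush_each:
                f.flush()
                os.fsync(f.fileno())  # force each small write to disk

entries = [b"x" * 512 for _ in range(1000)]
with tempfile.TemporaryDirectory() as d:
    write_entries(os.path.join(d, "buffered"), entries, flush_each=False)
    write_entries(os.path.join(d, "synced"), entries, flush_each=True)
    # Both variants produce identical output; only the number of disk
    # operations differs (1000 fsyncs vs. a handful of large writes).
    a = open(os.path.join(d, "buffered"), "rb").read()
    b = open(os.path.join(d, "synced"), "rb").read()
assert a == b == b"x" * 512000
```

The buffered variant issues orders of magnitude fewer disk operations, which is consistent with the <400 K/s write rate observed above.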