facebookresearch / cc_net

Tools to download and cleanup Common Crawl data

cc_net/tools/dl_cc_100.py fails to extract complete dataset #25

Open leezu opened 3 years ago

leezu commented 3 years ago

Running python3.7 cc_net/tools/dl_cc_100.py --outdir data/cc100 --processes 96 provides only 99 GB (277 GB uncompressed) of data across 10 languages:

780M    /mnt/data/cc100/bn_IN
2.0G    /mnt/data/cc100/hi_IN
25G     /mnt/data/cc100/id_ID
12G     /mnt/data/cc100/ko_KR
89M     /mnt/data/cc100/my_MM
25G     /mnt/data/cc100/sv_SE
270M    /mnt/data/cc100/sw_KE
6.7G    /mnt/data/cc100/th_TH
475M    /mnt/data/cc100/tl_XX
21G     /mnt/data/cc100/vi_VN

The script should provide all 100 languages listed in https://arxiv.org/pdf/1911.02116.pdf Figure 1:

[Figure 1 of the XLM-R paper (https://arxiv.org/pdf/1911.02116.pdf), showing the amount of data per language for all 100 CC-100 languages]
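For reference, a quick way to see how far the download got is to compare the directories under --outdir with the languages we expect. This is only a sketch (the EXPECTED set below is an abbreviated, illustrative subset; the directory names follow the listing above and the full list is in Figure 1 of the paper):

```python
# Sketch only: compare the language directories that dl_cc_100.py produced
# against the languages we expect. EXPECTED is an abbreviated, illustrative
# subset; the full 100-language list is in Figure 1 of the XLM-R paper.
from pathlib import Path

EXPECTED = {
    "bn_IN", "hi_IN", "id_ID", "ko_KR", "my_MM",
    "sv_SE", "sw_KE", "th_TH", "tl_XX", "vi_VN",
    # ... extend with the remaining languages from the paper
}

outdir = Path("data/cc100")
found = {p.name for p in outdir.iterdir() if p.is_dir()}
print(f"found {len(found)} languages")
print("missing:", sorted(EXPECTED - found))
```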

leezu commented 3 years ago

Fortunately, the dataset is also available at http://data.statmt.org/cc-100/. Nevertheless, the code in the current repo should be fixed, and ideally a link to http://data.statmt.org/cc-100/ should be included in the README. Thanks.
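For anyone hitting this later, a minimal sketch of fetching a few languages from that mirror could look like the following. The <lang>.txt.xz naming is an assumption based on the mirror's index page, so check the actual file listing before relying on it:

```python
# Minimal sketch, assuming the mirror serves one <lang>.txt.xz file per
# language (check http://data.statmt.org/cc-100/ for the real file names).
import urllib.request
from pathlib import Path

BASE = "http://data.statmt.org/cc-100/"
LANGS = ["sv", "id", "vi"]  # illustrative subset, not the full 100

outdir = Path("data/cc100_statmt")
outdir.mkdir(parents=True, exist_ok=True)
for lang in LANGS:
    fname = f"{lang}.txt.xz"
    dest = outdir / fname
    if dest.exists():
        continue
    print(f"downloading {fname} ...")
    urllib.request.urlretrieve(BASE + fname, dest)
```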

wangyong1122 commented 3 years ago

@leezu Hi, thank you very much for pointing to this website. I found that the download speed from this site is slow, and I also cannot download multiple files simultaneously. How did you solve this problem? Thanks.

leezu commented 3 years ago

@wangyong1122 In principle, you could try to avoid the IP-based throttling of statmt.org by using multiple machines with different IP addresses at the same time.
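Concretely, one simple way to do that is to shard the language list by machine index so each host (and IP) downloads a disjoint subset. A rough sketch, with made-up settings:

```python
# Rough sketch: each machine gets a disjoint slice of the language list.
# NUM_MACHINES and MACHINE_INDEX are hypothetical; set MACHINE_INDEX
# differently (0, 1, 2, ...) on every host.
LANGS = ["sv", "id", "vi", "ko", "th", "hi", "bn", "my", "sw", "tl"]

NUM_MACHINES = 4
MACHINE_INDEX = 0  # 0 .. NUM_MACHINES - 1

my_langs = [l for i, l in enumerate(LANGS) if i % NUM_MACHINES == MACHINE_INDEX]
print("this machine downloads:", my_langs)
# ...then feed my_langs to whatever download loop each machine runs.
```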

wangyong1122 commented 3 years ago

@leezu I see. Thank you very much.

izaskr commented 3 years ago

> @wangyong1122 In principle, you could try to avoid the IP-based throttling of statmt.org by using multiple machines with different IP addresses at the same time.

Does this give the same data as downloading it from this repo (after specifying the desired language(s), deduplication, etc.)? By "the same" I mean the same format and formatting. I would compare the two myself, but I cannot use this repo to download on a remote server.

zhangfanTJU commented 3 years ago

@gwenzek I also encountered the same problem. Do you have any plans to update the code? Thanks!