It's a pain to resume a download with commonsdownloader.py:
1) if the ZIP was already created for the day, it starts downloading everything
again and eventually overwrites the ZIP (though you can kill it before it
reaches the compression stage);
2) more importantly, if the day wasn't downloaded completely, it deletes the
CSV file and starts downloading everything from scratch:
2a) wget already avoids redownloading files that exist,
2b) curl, however, redownloads the XML every time.
Hence resuming currently takes ages.
Original issue reported on code.google.com by nemow...@gmail.com on 28 Sep 2013 at 7:12