ukwa / ukwa-manage

Shepherding our web archives from crawl to access.

Make one-off index jobs use batching for large tasks #96

Open anjackson opened 2 years ago

anjackson commented 2 years ago

Things like one-off CDX/Solr indexing jobs work okay, but if it's necessary to index a large amount of content, they will fail. If a large input is passed in, the code should break the input into batches of e.g. 1000 input files (with the batch size as a CLI option), then run each batch in turn.
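
For illustration, a minimal sketch of such a batching loop in Python (the `--batch-size` option and the commented-out `process_batch` call are hypothetical, not existing code):

```python
import argparse

def batched(items, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("input_file", help="File listing one input path per line")
    parser.add_argument("--batch-size", type=int, default=1000)
    args = parser.parse_args()

    with open(args.input_file) as f:
        inputs = [line.strip() for line in f if line.strip()]

    # Run each batch in turn rather than submitting one huge job:
    for n, batch in enumerate(batched(inputs, args.batch_size), start=1):
        print(f"Processing batch {n} ({len(batch)} files)...")
        # process_batch(batch)  # hypothetical: the actual indexing call

if __name__ == "__main__":
    main()
```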

Hmm, the issue here is that this is quite brittle: if there's a failure, you have to go back to the start. Alternatively, the code could simply refuse to process more than 1000 inputs, forcing the script user to use `split` to break up the task.
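
That guard is only a few lines. A sketch, with an illustrative limit and error message:

```python
import sys

MAX_INPUTS = 1000

def check_input_size(inputs):
    """Refuse oversized inputs and point the user at split(1)."""
    if len(inputs) > MAX_INPUTS:
        sys.exit(
            f"ERROR: {len(inputs)} inputs exceeds the limit of {MAX_INPUTS}.\n"
            f"Split the input first, e.g.: split -l {MAX_INPUTS} input.txt input.split_"
        )
```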

Additionally, the tools could adopt a convention of writing the summary results out to `<input-file>.out.jsonl` and skipping the run if that file is already present. That would make rerunning a set of batch jobs pretty easy to manage.
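
A sketch of that convention, where `index_fn` is a stand-in for whatever indexing call the tool actually makes, returning JSON-serialisable summary records:

```python
import json
import os

def run_if_needed(input_file, index_fn):
    """Run index_fn over input_file unless its .out.jsonl marker exists."""
    out_file = f"{input_file}.out.jsonl"
    if os.path.exists(out_file):
        print(f"Skipping {input_file}: {out_file} already present.")
        return
    results = index_fn(input_file)
    # Only write the marker file once the whole job has succeeded:
    with open(out_file, "w") as f:
        for record in results:
            f.write(json.dumps(record) + "\n")
```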

These two ideas could be brought together: if the `<input-file>` is large, the script generates splits like `<input-file>.split_1`, and records completion in `<input-file>.split_1.out.jsonl`.
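
Putting the two together might look like this sketch, reusing the hypothetical `run_if_needed` from above:

```python
import os

def split_input(input_file, batch_size=1000):
    """Write <input-file>.split_N files of at most batch_size lines each."""
    with open(input_file) as f:
        lines = f.readlines()
    splits = []
    for n, start in enumerate(range(0, len(lines), batch_size), start=1):
        split_name = f"{input_file}.split_{n}"
        if not os.path.exists(split_name):
            with open(split_name, "w") as out:
                out.writelines(lines[start:start + batch_size])
        splits.append(split_name)
    return splits

def run_all(input_file, index_fn, batch_size=1000):
    """Split a large input, then run each split via the marker check above."""
    for split_name in split_input(input_file, batch_size):
        run_if_needed(split_name, index_fn)  # from the previous sketch
```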

Or, perhaps more simply, generate a `<input-file>.dbm` (using the `dbm` built-in module) and use that to keep track? Or maybe even just a `.jsonl` file that gets rewritten after each batch.
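
A `dbm`-based tracker could be as small as the following sketch, assuming each batch has a stable string identifier; `index_fn` again stands in for the real indexing call:

```python
import dbm

def run_with_dbm_tracking(input_file, batches, index_fn):
    """Record completed batch IDs in <input-file>.dbm so reruns can resume."""
    with dbm.open(f"{input_file}.dbm", "c") as db:
        for batch_id, batch in batches:
            if batch_id.encode() in db:
                continue  # already completed on a previous run
            index_fn(batch)
            db[batch_id] = "done"  # mark complete only after success
```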