Closed benmccann closed 11 years ago
This might be best handled with a --fail-fast flag and/or a config option that makes it exit with an error status code. Alternatively, that could be flipped so failing is the default, with a --catch-errors flag/option to opt back into the current behavior.
I guess it is nice that it backs up what it can. But it would be better if, after the backup completes, it exited with the appropriate status instead of always exiting successfully.
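One possible shape for this, sketched below. The `run_backups` helper and the (name, callable) task pairing are assumptions for illustration, not the script's actual structure:

```python
import argparse
import sys

def run_backups(tasks, fail_fast=False):
    """Run (name, callable) backup tasks; collect failures unless fail_fast."""
    failures = []
    for name, task in tasks:
        try:
            task()
        except Exception as exc:  # illustrative; real code would be narrower
            if fail_fast:
                raise
            failures.append((name, exc))
    return failures

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--fail-fast', action='store_true',
                        help='abort on the first error instead of continuing')
    args = parser.parse_args(argv)
    tasks = []  # would be built from the configured backup set
    failures = run_backups(tasks, fail_fast=args.fail_fast)
    # Exit non-zero when anything failed so callers can detect it.
    sys.exit(1 if failures else 0)
```

This keeps the "back up what it can" behavior by default while still reporting overall failure through the exit status.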
Some of this is horrible programming on my part: I shouldn't be doing try/except without checking the exception type. I've already started on some of this in a local branch. Because it uses multiprocessing, bubbling up warnings is "difficult".
I like the idea of printing a summary at the end. Python's multiprocessing Pool makes it pretty easy to get return values out of finished processes through some pickling magic.
```python
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':  # guard needed on platforms that spawn workers
    p = Pool(5)
    print(p.map(f, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```
I used that originally in earlier development, but for large file sets it used too much memory. Revisiting this, it appears that Pool would work, but not Pool.map(). I'll try again with apply_async(), which returns an AsyncResult (http://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult) that I can check for uncaught exceptions.
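A rough sketch of that approach; `backup_item` and `collect_failures` are stand-ins for the real worker and driver, not the script's actual API:

```python
from multiprocessing import Pool

def backup_item(name):
    # Stand-in worker: pretend the item named 'bad' fails to back up.
    if name == 'bad':
        raise ValueError('cannot back up %s' % name)
    return name

def collect_failures(names):
    pool = Pool(4)
    # apply_async returns one AsyncResult per submitted item; .get()
    # re-raises any exception the worker hit, so nothing is silently lost.
    async_results = [(n, pool.apply_async(backup_item, (n,))) for n in names]
    pool.close()
    failures = []
    for name, res in async_results:
        try:
            res.get()
        except Exception as exc:
            failures.append((name, str(exc)))
    pool.join()
    return failures

if __name__ == '__main__':
    print(collect_failures(['a', 'bad', 'c']))
```

For very large file sets you would want to consume results incrementally (or use Pool.imap_unordered) rather than holding every AsyncResult, which is the memory issue Pool.map() ran into.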
I'm calling this backup script from another process. I want to know when the process fails, which means it should exit with exit code 1 instead of exit code 0 when there was an error. The code is littered with try/except blocks that swallow errors.
Can we get rid of all the exception catching? Or can we log an error and then call sys.exit(1)?
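Short of removing the catching entirely, each broad handler could at least record the failure so the process can exit non-zero at the end. A sketch, with illustrative names rather than the script's real ones:

```python
import logging
import sys

log = logging.getLogger('backup')

def run_all(tasks):
    """Run (name, callable) tasks; log failures instead of swallowing them."""
    had_error = False
    for name, task in tasks:
        try:
            task()
        except Exception:
            # log.exception records the full traceback, so the error
            # is visible even though the loop keeps going.
            log.exception('backing up %s failed', name)
            had_error = True
    return had_error

def main(tasks):
    # Exit 1 on any failure so a calling process can detect it.
    sys.exit(1 if run_all(tasks) else 0)
```

This preserves the "keep going and back up what it can" behavior while giving the caller the non-zero exit status it needs.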