dmfigol opened this issue 6 years ago
Not so sure about this; IMO this is much better solved in plain Python:
```python
def my_task_with_retries(task, retry=0):
    try:
        r = task.run(...)
    except MyException:
        if retry < MAX_RETRIES:
            r = my_task_with_retries(task, retry + 1)
        else:
            raise
    except SomeOtherException:
        r = some_other_task(task)
    finally:
        cleaning_operations(task)
    return r
```
With a decorator this level of flexibility is hard to achieve, and it ends up polluting the core (which I think should be as simple and lean as possible to minimize the chances of bugs).
Also, note that a `grouped_task` is already a sort of decorator where you can wrap any task plugin and implement your own workflow.
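For illustration, a rough sketch of such a wrapper (`with_retries`, `wrapped_task` and `MyException` are made-up names, and it assumes the subtask's exception actually propagates up to the parent task):

```python
class MyException(Exception):
    """Placeholder for whatever exception the wrapped task may raise."""


def with_retries(task, wrapped_task, max_retries=3, **kwargs):
    """Hypothetical grouped task wrapping any other task plugin with retries."""
    for attempt in range(max_retries + 1):
        try:
            # run the wrapped task as a subtask
            return task.run(wrapped_task, **kwargs)
        except MyException:
            if attempt == max_retries:
                raise
```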
This is similar to the discussion we had (offline, sorry about that) here: #3
@dbarrosop currently the original library exception is not raised, only `NornirExecutionError`. This prevents a user from doing what you showed above. One would need to use `failed_hosts` and `result.exception` to make something like this work.
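Roughly like this at the top level (just a sketch; it assumes the `AggregatedResult.failed_hosts` and `Result.exception` attributes, and `nr`, `my_task` and `MyException` are illustrative names):

```python
results = nr.run(task=my_task)

# inspect which hosts failed and why, then decide how to react per host
for hostname, multi_result in results.failed_hosts.items():
    exc = multi_result[0].exception
    if isinstance(exc, MyException):
        nr.filter(name=hostname).run(task=my_task)  # e.g. retry just that host
```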
We could easily reraise the original exception in the subtask case. Either that or just something like:
```python
try:
    ...
except NornirSubTaskError as e:
    if isinstance(e, AnException):
        ...
    elif isinstance(e, SomeOtherException):
        ...
```
Could you give a full example with, let's say, some Netmiko or Napalm exception?
Task queues like Celery have a built-in decorator to handle retries for specific exceptions: http://docs.celeryproject.org/en/latest/userguide/tasks.html#retrying

I think we could implement something similar, where you could specify a list of exceptions for which the task should be retried and how many times to retry.
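Something along these lines, maybe (just a sketch of the idea, not an existing API; `retry_on`, `max_retries` and the chosen exceptions are all illustrative):

```python
import functools


def retry_on(exceptions, max_retries=3):
    """Hypothetical decorator: retry a task when it raises one of the given
    exceptions, up to max_retries extra attempts."""
    def decorator(task_func):
        @functools.wraps(task_func)
        def wrapper(task, *args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return task_func(task, *args, **kwargs)
                except exceptions:
                    if attempt == max_retries:
                        raise
        return wrapper
    return decorator


@retry_on((TimeoutError, ConnectionError), max_retries=2)
def my_task(task):
    ...
```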