Closed albertpod closed 1 year ago
@bvdmitri, perhaps this requires a separate issue, but I wonder if we'd like to support user "interruptions" such that the inference dumps/saves the current results but continues running. I imagine that inference will be running all the time for an actual agent, so supporting this kind of "dump" might be useful.
@albertpod It should not be an issue, because the `rxinference` function (the one that is supposed to run for actual agents) exposes marginals as streams of data, so you can subscribe/unsubscribe while inference continues. If a user really wants to save some intermediate results in the `inference` function, that is also possible with the `callbacks` keyword argument.
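To make the subscribe/unsubscribe idea concrete, here is a rough sketch of saving intermediate posteriors from a running engine. The model name, the `:x` variable, and `save_intermediate` are illustrative placeholders; the `subscribe!`/`unsubscribe!` calls follow the Rocket.jl observable API that RxInfer builds on, and the exact keyword set of `rxinference` may differ between versions.

```julia
using RxInfer, Rocket

# Hypothetical streaming setup; model and keywords are illustrative.
engine = rxinference(
    model      = my_streaming_model(),  # assumed user-defined model
    datastream = my_datastream,         # assumed Rocket.jl observable of data
    autostart  = false,
)

# Posteriors are exposed as streams: subscribe to dump intermediate
# results at any time while inference keeps running.
subscription = subscribe!(engine.posteriors[:x], (posterior) -> begin
    save_intermediate(posterior)        # hypothetical dump/save routine
end)

RxInfer.start(engine)
# ... later, stop listening without stopping the inference itself:
unsubscribe!(subscription)
```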
I am opening this issue as a starting milestone toward "robustness."

At the moment, if an error occurs during inference, `RxInfer` does not return the intermediate results that were acquired before the error occurred. The error can be caused by different things: numerical instabilities, an inadequate data stream, user interruption during inference, etc. As a suggestion, we may want to add an optional argument to the `inference` function:

`inference(..., allow_failure = true)`
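As a sketch of how the proposed flag might look from the user's side (the `allow_failure` keyword does not exist yet; the model and data names below are illustrative):

```julia
# Hypothetical usage of the proposed flag.
result = inference(
    model         = my_model(),      # assumed user-defined model
    data          = (y = dataset,), # assumed dataset
    iterations    = 50,
    allow_failure = true,            # proposed: do not discard work on error
)
# With allow_failure = true, `result` would still hold the marginals computed
# before the failure, ideally together with the error that interrupted inference.
```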