impredicative opened 1 day ago
Sure. At the moment it uses a Bayesian smoother by default (an RTS smoother), which inherently uses all of the data. We could of course switch to a filtering-based approach, where each point estimate relies only on the data that precedes it. To do this we would just use:
```python
# Run a Bayesian filter only (causal: each estimate uses only past data)
filter = BayesianFilter(transition_model, initial_state)
filter_states, filter_times = filter.run(observations, t_array, 1.0/delta_t, use_jacobian=True)
return filter_states, filter_times
```
rather than:
```python
# Run a Bayesian filter, then an RTS smoothing pass (uses all of the data)
filter = BayesianFilter(transition_model, initial_state)
filter_states, filter_times = filter.run(observations, t_array, 1.0/delta_t, use_jacobian=True)
smoother = RTS(filter)
smoother_states = smoother.apply(filter_states, filter_times, use_jacobian=False)
return smoother_states, filter_times
```
(Basically, just don't run the smoother on the filter outputs.)
I guess in the API for `kalmangrad.grad` we could just add a flag to choose between filter and smoother, if there was demand for that. It would mostly be useful for offline testing of online techniques :)
I look forward to this being usable from `kalmangrad.grad`.
I don't mind if there is some special smoother that works only with historical values. With my simplistic understanding of the API, I see a possible need for these two flags:
```python
online: bool = False  # If True, completely removes dependence on future values.
smoother: Optional[str] = 'RTS'  # Or None to disable smoothing. Could alternatively be `Optional[Enum]`, which is more professional.
```
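To make the proposal concrete, here is a hypothetical sketch of how the two flags could dispatch. Everything here is an assumption: this is not `kalmangrad.grad`'s real signature, and the filter/smoother bodies are trivial stand-ins for the library's `BayesianFilter` and `RTS` classes — only the dispatch logic is the point.

```python
from typing import List, Optional, Sequence, Tuple

def grad(observations: Sequence[float],
         t_array: Sequence[float],
         online: bool = False,
         smoother: Optional[str] = 'RTS') -> Tuple[List[tuple], Sequence[float]]:
    # Stand-in for BayesianFilter(...).run(...): a causal forward pass.
    filter_states = [("filtered", y) for y in observations]
    if online or smoother is None:
        # Causal mode: each returned state depends only on past observations.
        return filter_states, t_array
    if smoother == 'RTS':
        # Stand-in for RTS(filter).apply(...): a non-causal smoothing pass.
        return [("smoothed", y) for _, y in filter_states], t_array
    raise ValueError(f"unknown smoother: {smoother!r}")

# The default keeps today's behaviour (full RTS smoothing)...
states, _ = grad([1.0, 2.0], [0.0, 1.0])
assert states[0][0] == "smoothed"

# ...while either flag switches to the causal, filter-only path.
states, _ = grad([1.0, 2.0], [0.0, 1.0], online=True)
assert states[0][0] == "filtered"
states, _ = grad([1.0, 2.0], [0.0, 1.0], smoother=None)
assert states[0][0] == "filtered"
```

Having both flags is slightly redundant (`online=True` and `smoother=None` select the same path here); an enum for `smoother` would collapse them into one parameter.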
A related point: currently there is no output for the last input timestamp, and it would help if one existed, if possible. Without it, I have to shift the outputs by one for them to align, and this shifting seems less than ideal. To me the last input is the most important one.
Currently, `kalmangrad.grad` produces, for each input timestamp, outputs that depend on subsequent input values. I suppose this could be natural for it, but is it possible to produce outputs without this dependence?
To test this, suppose I drop the latter half of the input values. The output of `kalmangrad.grad` for the remaining first half of the inputs will then be very different from what it was when all input values were kept.
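That truncation test can be reproduced with a self-contained toy model. The scalar random-walk Kalman filter and RTS smoother below are stand-ins for kalmangrad's internals (the noise variances and signal are made up), but they demonstrate the general behaviour: filter estimates are unchanged when future data is removed, smoothed estimates are not.

```python
import math

Q, R = 1e-3, 1e-2  # assumed process / observation noise variances (toy values)

def kalman_filter(ys, m0=0.0, p0=1.0):
    """Forward pass for a scalar random walk: estimate k uses only ys[0..k]."""
    ms, ps, preds = [], [], []
    m, p = m0, p0
    for y in ys:
        mp, pp = m, p + Q          # predict (identity transition)
        k = pp / (pp + R)          # Kalman gain
        m = mp + k * (y - mp)      # update with the current observation
        p = (1.0 - k) * pp
        ms.append(m); ps.append(p); preds.append((mp, pp))
    return ms, ps, preds

def rts_smooth(ms, ps, preds):
    """Backward RTS pass: every estimate now depends on ALL observations."""
    sm, sp = ms[-1], ps[-1]
    out = [sm]
    for k in range(len(ms) - 2, -1, -1):
        mp, pp = preds[k + 1]      # one-step prediction made at step k+1
        g = ps[k] / pp             # smoother gain
        sm = ms[k] + g * (sm - mp)
        sp = ps[k] + g * g * (sp - pp)
        out.append(sm)
    out.reverse()
    return out

# A smooth signal plus a small wiggle, observed at 100 timestamps.
ys = [math.sin(0.1 * k) + 0.05 * math.cos(7.0 * k) for k in range(100)]

full = kalman_filter(ys)         # all observations
half = kalman_filter(ys[:50])    # latter half removed

# Filtering is causal: the first 50 estimates are bit-for-bit identical.
assert all(abs(a - b) < 1e-12 for a, b in zip(full[0][:50], half[0]))

# Smoothing is not: removing the future changes the first-half outputs.
full_s = rts_smooth(*full)
half_s = rts_smooth(*half)
assert max(abs(a - b) for a, b in zip(full_s[:50], half_s)) > 1e-4
```

So the behaviour described above is expected for any RTS-style smoother; only the filter-only (or `online`) path can pass this truncation test.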