Perhaps it makes sense, for the purposes of this ticket, to fix an experiment such as this one.
If you trace in, you'll see how it generates surrogate training data. This could be improved or replaced entirely; I'm not sure. But maybe we should first ensure that the normalization is consistent both here and where it is used in a successor skater.
Some possibilities:
The training and prediction data should not overlap at all; currently they do.
Different scaling of the input data, for better consistency
Different data augmentation may be required
A bug in the interpolation in k?
Just not enough training data
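On the first two points, a minimal sketch of what "no overlap, consistent scaling" could look like. This is not the actual surrogate-data generator in the experiment; the function name, window size, and holdout scheme here are all hypothetical, but the two invariants it enforces are the ones in question: the prediction holdout never appears in the training pairs, and the same normalization (fit on training data only) is applied to both.

```python
import numpy as np

def make_surrogate_data(ys, n_input=10):
    """Hypothetical sketch: build surrogate training pairs from a series,
    holding out the most recent n_input points for prediction (no overlap
    with training) and scaling everything with statistics computed on the
    training portion only, so train and prediction normalization agree."""
    ys = np.asarray(ys, dtype=float)
    # Reserve the tail for prediction; train only on the earlier portion
    train, holdout = ys[:-n_input], ys[-n_input:]
    # One set of scaling statistics, fit on training data, reused everywhere
    mu = train.mean()
    sigma = train.std() or 1.0  # guard against a constant series
    scale = lambda x: (x - mu) / sigma
    X, y = [], []
    for i in range(len(train) - n_input):
        X.append(scale(train[i:i + n_input]))   # lagged input window
        y.append(scale(train[i + n_input]))     # next value to predict
    return np.array(X), np.array(y), scale(holdout)
```

If the successor skater receives the holdout, it must also receive (or recompute) the same `mu` and `sigma`, otherwise the inconsistency described above reappears.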
Do we backtrack to something even simpler, like a fast moving average? Maybe!
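For reference, the fallback being suggested is about this small. A sketch, assuming a stateful one-step-ahead interface in the spirit of a skater (the exact signature here is illustrative, not the library's):

```python
def ema_skater(y, s=None, alpha=0.1):
    """Hypothetical fallback sketch: an exponentially weighted moving
    average as a one-step-ahead forecaster. State `s` is carried between
    calls, loosely mimicking a skater-style (y, s) -> (x, s) interface."""
    if s is None:
        s = {'level': float(y)}        # initialize level on first call
    else:
        # Standard EMA update: blend new observation with previous level
        s['level'] = alpha * float(y) + (1 - alpha) * s['level']
    return s['level'], s               # forecast is the current level
```

Cheap, hard to get wrong, and a useful baseline to beat before debugging the surrogate-data pipeline further.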