Open ablaom opened 3 years ago
As I understand it, the smoother is a one-shot transformer, in the sense that the parameters you "learn" are not ever applied to any data except the data you "learned" from. Yes?

So, if there is some idea to implement the smoother as an MLJ model, then here is my suggestion: implement it as a `Static` transformer. This means there is no `MLJModelInterface.fit` to implement, only an `MLJModelInterface.transform` method, which will probably combine both the local `fit!` and `predict` methods.

Scitypes

Small suggestion: "ExponentialSmoother" is probably a better name than "ExponentialSmoothing". One tends to anthropomorphise these things ("transformer" not "transformation", "classifier" not "classification", and so forth).

@vollmersj

---

Hi @ablaom, thanks for your comments.

> the parameters you "learn" are not ever applied to any data except the data you "learned" from.

Yes, this is correct.

The name suggestion makes sense. I plan on migrating this code into this repo: https://github.com/ababii/Pythia.jl, to include it along with other algorithms, and when I do, I will change the name.
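
For reference, the `Static` transformer idea above can be sketched roughly as follows. This is a minimal illustration, not the package's actual implementation: the struct name, the `alpha` hyperparameter, and the smoothing loop are all assumptions made for the example; only the `MLJModelInterface` API calls follow the interface being suggested.

```julia
import MLJModelInterface
const MMI = MLJModelInterface

# Hypothetical model struct: a Static model, so MLJ supplies a no-op `fit`
# and we only implement `transform`.
MMI.@mlj_model mutable struct ExponentialSmoother <: MMI.Static
    alpha::Float64 = 0.5::(0 < _ <= 1)  # illustrative smoothing parameter
end

# For Static models, `transform(model, fitresult, data...)` receives
# `nothing` as the fitresult; the local fit-and-predict happens here.
function MMI.transform(model::ExponentialSmoother, ::Nothing, y)
    smoothed = float.(collect(y))
    for t in 2:length(smoothed)
        smoothed[t] = model.alpha * smoothed[t] +
                      (1 - model.alpha) * smoothed[t - 1]
    end
    return smoothed
end
```

Because the model is `Static`, wrapping it in a machine requires no training data, and the "learned" parameters never leave the data they were computed from, consistent with the one-shot description above.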