Open ColdTeapot273K opened 2 months ago
Hey there @ColdTeapot273K!

Would you say this is a bug? The idea here is that there is no need to keep track of values when `drop_zeros=True`. Indeed, if `drop_zeros=False`, then we need to keep track of values in order to add zeros. But if not, there is no need to keep track, which is why we return directly in `learn_one` and `learn_many`.

What am I missing? Maybe it would be preferable if `oh.values` were a private property?
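For context, the trade-off can be sketched in plain Python. This is a simplified illustration of the idea described above, not river's actual code, and the class name is hypothetical:

```python
from collections import defaultdict


class SketchOneHot:
    """Simplified one-hot encoder illustrating the drop_zeros trade-off."""

    def __init__(self, drop_zeros=False):
        self.drop_zeros = drop_zeros
        self.values = defaultdict(set)  # seen values per feature

    def learn_one(self, x):
        if self.drop_zeros:
            return  # no zeros to emit later, so nothing to remember
        for i, xi in x.items():
            self.values[i].add(xi)

    def transform_one(self, x):
        oh = {}
        if not self.drop_zeros:
            # zeros for every value seen so far ...
            oh = {f"{i}_{v}": 0 for i, vs in self.values.items() for v in vs}
        for i, xi in x.items():
            oh[f"{i}_{xi}"] = 1  # ... then ones for the current values
        return oh


oh = SketchOneHot(drop_zeros=False)
oh.learn_one({"colour": "blue"})
oh.learn_one({"colour": "red"})
oh.transform_one({"colour": "blue"})  # {'colour_blue': 1, 'colour_red': 0} (up to key order)
```

With `drop_zeros=True` the zeros branch never runs, so skipping the bookkeeping in `learn_one` produces the same transform output with less state.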
@MaxHalford sorry for the delay.

I get the idea, but let me argue that it's a hack.

Per the principle of least astonishment, learning implies some "learnt state". Workflows for debugging, integration with other libraries, serialization, etc. of learnable transformers/estimators rely on this behaviour.

Personally, I've had quite a few cases where it was at least handy (and sometimes necessary) to inspect such a state to verify behaviour, especially when productionising pipelines, converting to other frameworks, developing extensions, etc.

The current implementation is a logic shortcut that saves some space (and maybe time) complexity, at the cost of "correctness": it makes the learning logic inconsistent and disables workflows that depend on inspecting a learnt state.
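To make the argument concrete, here is a hypothetical sketch (plain Python, not river's API) of the kinds of workflows that rely on a learnt state being present and inspectable:

```python
from collections import defaultdict
from copy import deepcopy


class StatefulOneHot:
    """Hypothetical minimal encoder that keeps a learnt state."""

    def __init__(self):
        self.values = defaultdict(set)  # learnt state: seen values per feature

    def learn_one(self, x):
        for i, xi in x.items():
            self.values[i].add(xi)


enc = StatefulOneHot()
for x in [{"colour": "blue"}, {"colour": "red"}]:
    enc.learn_one(x)

# 1. Debugging / verification: inspect what the encoder has seen so far.
assert enc.values["colour"] == {"blue", "red"}

# 2. Conversion to another framework: export the learnt vocabulary.
vocab = sorted(enc.values["colour"])  # ['blue', 'red']

# 3. Serialization-style round-trips preserve the state.
clone = deepcopy(enc)
assert clone.values == enc.values
```

If `learn_one` is a no-op, all three workflows silently see an empty state instead of failing loudly.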
E.g. now I have to either maintain custom patches in a river fork, give up on the performance gains from sparsity, or stay on old river releases. I don't think this trade-off was worth it.

Proposal: the previous implementation (which kept the learnt state) was just fine and should be the default. The current shortcut implementation can be re-implemented by advanced users who want to optimise library code for some specific use case.
Versions

river version: recent main (online-ml/river@d606d7b4b70e1867e601d77c645880eadb3ae472)
Python version: 3.11.8
Operating system: Fedora Linux
Describe the bug

These 4 interesting lines effectively stop `OneHotEncoder` from learning when `drop_zeros=True`:

Steps/code to reproduce

Setup:

Actual result:

Expected result:
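The setup and result snippets did not survive here. As a stand-in, the reported symptom can be sketched in plain Python; this is a hypothetical simplification of the early-return behaviour discussed above, not river's actual code:

```python
from collections import defaultdict


class ShortcutOneHot:
    """Hypothetical simplification: learn_one returns without recording anything."""

    def __init__(self):
        self.values = defaultdict(set)

    def learn_one(self, x):
        return  # the shortcut under discussion: no learnt state is kept


oh = ShortcutOneHot()
for x in [{"colour": "blue"}, {"colour": "red"}]:
    oh.learn_one(x)

# Actual result: the learnt state stays empty, so there is nothing to inspect.
assert dict(oh.values) == {}

# Expected result (per the issue): the state would contain the seen values,
# i.e. {"colour": {"blue", "red"}}.
```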