Closed: M-R-T-U-D closed this issue 1 year ago
Thanks @M-R-T-U-D,
It looks like I forgot those 2 parameters. Do you think that the above PR would solve the problem?
Hi Optimox, yes that should fix the problem. Thanks for fixing the bug 👍 Now the shared and independent layers in the decoder will change when passed via TabNetPretrainer.
@M-R-T-U-D I did a test with the bugfix and the number of parameters does change if you add more decoder layers.
I'll try to make a release soon; in the meantime you can use the develop branch.
Out of curiosity, have you been able to play with the attention groups?
Not for now. Do you suspect that there is a bug with that also?
No, it's just that you seem to be using the library quite heavily, so I would be happy to get some feedback about this, since it's not in the original paper.
Not planning to use it anytime soon, but I will let you know if I do use it.
Describe the bug
See title
What is the current behavior?
If the current behavior is a bug, please provide the steps to reproduce.
1. Create a TabNetPretrainer instance, e.g.:
2. Use summary() from the torchinfo library to show the total number of parameters of the network:
3. Total params: 406,684
4. Change n_shared_decoder or n_indep_decoder to e.g. 4 and you will see the same total params as in step 3. So changing the said params does not affect the size.
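For reference, a minimal reproduction sketch, assuming pytorch-tabnet and torchinfo are installed; the dummy data shape, the variable names (X_train, pretrainer) and the parameter values are illustrative and not taken from the original report:

```python
import numpy as np
from torchinfo import summary
from pytorch_tabnet.pretraining import TabNetPretrainer

# Dummy unlabeled data, only used so that fit() builds the underlying network.
X_train = np.random.rand(1024, 54).astype(np.float32)

# n_shared_decoder / n_indep_decoder are the parameters under discussion.
pretrainer = TabNetPretrainer(n_shared_decoder=4, n_indep_decoder=4)
pretrainer.fit(X_train, max_epochs=1)

# With the bug present, these two arguments are not forwarded to TabNetPretraining,
# so the reported total parameter count stays the same whatever values are used above.
summary(pretrainer.network)
```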
Expected behavior
The size of TabNet should change, since the independent and shared layers in the decoder change. This does not happen because both params are not being passed from the TabNetPretrainer class to the TabNetPretraining instance.
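As an illustration of the expected behaviour, a hedged sketch that builds the underlying TabNetPretraining module directly and checks that the parameter count grows with the decoder arguments; the input_dim value is illustrative, and this assumes TabNetPretraining exposes n_shared_decoder and n_indep_decoder as keyword arguments:

```python
from pytorch_tabnet.tab_network import TabNetPretraining

def n_params(n_shared_decoder: int, n_indep_decoder: int) -> int:
    # Build the pretraining network with the given decoder settings
    # and count its trainable parameters.
    net = TabNetPretraining(
        input_dim=54,  # illustrative feature count
        n_shared_decoder=n_shared_decoder,
        n_indep_decoder=n_indep_decoder,
    )
    return sum(p.numel() for p in net.parameters())

# A larger decoder should report more parameters.
print(n_params(1, 1))
print(n_params(4, 4))
```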
Screenshots
Other relevant information:
- poetry version: -
- python version: 3.8
- Operating System: Linux
- Additional tools: torchinfo
Additional context