Hi!
First, I want to thank you for the great paper and repo; your work is highly appreciated!
I think I've found a bug in the implementation of the MLP in `src/tabular/bin/mlp.py`.
Line 46: `d_layers.append(d_out)`
Line 55: `self.head = nn.Linear(d_layers[-1] if d_layers else d_in, d_out)`
As a result, `self.head` is always a linear map from dimension `d_out` to dimension `d_out`, regardless of the configured hidden sizes.
This should be fixed by removing line 46.
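Here is a minimal sketch of the dimension bookkeeping (the values are made up; only the two lines quoted above are from the repo) showing why the head's input dimension collapses to `d_out`:

```python
# Hypothetical dimensions; only the logic of lines 46 and 55 mirrors mlp.py.
d_in, d_out = 10, 2
d_layers = [64, 32]  # hidden layer sizes from the config

# Current behavior: line 46 appends d_out to d_layers...
d_layers.append(d_out)
# ...so line 55 always selects d_out as the head's input dimension:
head_in = d_layers[-1] if d_layers else d_in
print(head_in)  # 2, i.e. d_out, no matter what the hidden sizes are

# Without line 46, the head would map from the last hidden dimension:
d_layers_fixed = [64, 32]
head_in_fixed = d_layers_fixed[-1] if d_layers_fixed else d_in
print(head_in_fixed)  # 32
```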
I hope this helps!