Hazqeel09 closed this issue 1 year ago
Why can't you use TabNetEncoder directly ? https://github.com/dreamquark-ai/tabnet/blob/bcae5f43b89fb2c53a0fe8be7c218a7b91afac96/pytorch_tabnet/tab_network.py#L41
I'm sorry, I'm not quite familiar with how to use that. I've been trying for weeks.
Does it mean I can use TabNetEncoder like a neural network layer? So I can combine it into a big layer like the paper?
Yes, you can use this as part of a bigger model. Any torch.nn.Module can be used as a building block.
So you'll need part of your inputs going to the BERT model and the other part going to the TabNetEncoder; then you concatenate the two outputs and pass them to the 2D convolutional layer.
I see, thank you for your help.
I am trying to do something similar to https://github.com/dreamquark-ai/tabnet/issues/167, where I want to use TabNet as a feature extractor. But I noticed that the output dimension in https://github.com/dreamquark-ai/tabnet/issues/167 equals the number of prediction classes.
I want to reproduce the method of this paper (https://ieeexplore.ieee.org/document/9658729), where TabNet produces an embedding of 64 dimensions.