dreamquark-ai / tabnet

PyTorch implementation of TabNet paper: https://arxiv.org/pdf/1908.07442.pdf
https://dreamquark-ai.github.io/tabnet/
MIT License

Research : Boosted-TabNet? #124

Open Optimox opened 4 years ago

Optimox commented 4 years ago

Main Remark

The TabNet architecture uses sequential steps in order to mimic some kind of random forest paradigm. But since boosting algorithms often outperform random forests, shouldn't we try to move towards boosting methods instead of random forests?

Proposed Solutions

One solution I see here would be to predict different things at each step of TabNet in order to perform boosting: each step would try to predict the residuals left by the previous steps rather than the target itself.

This looks like it could work quite easily for regression problems, but I'm not sure how it would work for classification tasks: you can't stay in the classification paradigm and simply predict residuals. If anyone knows of a specific loss function that would make that happen, I think it's worth a try!
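For the regression case, here is a minimal sketch of what such a step-wise residual scheme could look like, assuming hypothetical per-step scalar outputs `step_outputs` and a shrinkage factor; none of this exists in the current codebase:

```python
# Hypothetical sketch: boosting-style aggregation of TabNet step outputs for
# regression. Each step is fitted to the residual left by the (shrunk) sum of
# the previous steps. Not part of the current TabNet implementation.
import torch
import torch.nn.functional as F

def boosted_regression_loss(step_outputs, target, shrinkage=0.3):
    # step_outputs: list of (batch, 1) tensors, one per decision step
    prediction = torch.zeros_like(target)
    loss = torch.zeros(())
    for out in step_outputs:
        residual = target - prediction.detach()    # what remains to be explained
        loss = loss + F.mse_loss(out, residual)    # each step fits the residual
        prediction = prediction + shrinkage * out  # boosting-style additive update
    return loss, prediction
```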

If you feel like this is interesting and would like to contribute, please share your ideas in comments or open a PR!

AlexisMignon commented 4 years ago

Why not try a straightforward application of gradient boosting? Each step fits the gradient of the loss function (as computed so far) and adds it (using line search) to the previous result. Only regression is needed internally (to fit the gradient), and it works for both regression and classification.
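For reference, the generic functional-gradient recipe could be sketched as follows; `fit_weak_learner` and `line_search` are placeholders for illustration, not functions from this repository:

```python
# Sketch of generic gradient boosting (functional gradient descent).
# fit_weak_learner and line_search are placeholders for illustration only.
import numpy as np

def gradient_boost(X, y, loss_grad, fit_weak_learner, line_search, n_steps=10):
    F_pred = np.zeros(len(y))                # current decision function values
    learners, weights = [], []
    for _ in range(n_steps):
        g = -loss_grad(y, F_pred)            # pseudo-residuals: negative gradient
        h = fit_weak_learner(X, g)           # regression fit to the gradient
        rho = line_search(y, F_pred, h(X))   # step size along h
        F_pred = F_pred + rho * h(X)         # additive update
        learners.append(h)
        weights.append(rho)
    return learners, weights, F_pred
```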

rasenganai commented 4 years ago

Interesting. For classification, I think we can try the same approach as the gradient boosting algorithm, and/or AdaBoost as mentioned in their paper, using a cross-entropy loss function?

In the case of the gradient boosting technique, the output of each step would be multiplied by a learning rate and summed to get the log-odds, on which we can apply a sigmoid to get the probability (0/1)?

In the case of AdaBoost, we could maybe use the same weighting formula as mentioned in the paper.

It would also be interesting to somehow use the mask weights to give an "importance weight" to each step's contribution to the final prediction, since the mask heatmaps show that some masks are much less activated than others. It may improve decision making.
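A hedged sketch of this mask-weighting idea, assuming access to per-step feature masks `masks` and per-step logits `step_logits`; the weighting scheme itself is an assumption, not something from the paper:

```python
# Hypothetical sketch: weight each step's contribution by how "active" its
# feature mask is, instead of summing the step outputs uniformly.
import torch

def mask_weighted_logits(step_logits, masks):
    # step_logits: list of (batch, n_classes); masks: list of (batch, n_features)
    activity = torch.stack([m.sum(dim=1) for m in masks], dim=1)  # (batch, n_steps)
    weights = torch.softmax(activity, dim=1)                      # importance per step
    stacked = torch.stack(step_logits, dim=1)                     # (batch, n_steps, n_classes)
    return (weights.unsqueeze(-1) * stacked).sum(dim=1)           # weighted combination
```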

The training would also be different: boosted algorithms train one tree at a time and then add it to the ensemble, whereas here all the weak learners would be learning simultaneously.

I would like to do some research and contribute to this.

Abhishek-eBook

Optimox commented 4 years ago

@AlexisMignon approaching classification problems with regression could be a solution, but I feel like it's not satisfying, especially for multi-class classification...

@JaskaranSingh-Precily TabNet is already using cross entropy, but you need integer targets to apply cross entropy, so I don't see how a boosted version could use cross entropy at every step. Could you explain and/or give some links to the literature? I probably just need to dig a bit deeper into how XGBoost deals with multi-class classification.
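For what it's worth, the usual multi-class trick in gradient boosting is to keep the integer labels for the overall loss but fit one regressor per class and per round on the gradient of the softmax cross-entropy, which is simply y_k - p_k. A rough sketch of those regression targets (an illustration, not repo code):

```python
# Sketch: regression targets for one round of multi-class gradient boosting.
# With softmax probabilities p and one-hot labels y, the pseudo-residual for
# class k is y_k - p_k, so each round only needs K regression fits.
import torch
import torch.nn.functional as F

def multiclass_pseudo_residuals(logits, targets):
    # logits: (batch, n_classes) current additive decision function
    # targets: (batch,) integer class labels
    p = torch.softmax(logits, dim=1)
    y = F.one_hot(targets, num_classes=logits.size(1)).float()
    return y - p  # regression targets for the next boosting step
```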

@Jaskaran170599 Not sure you'll double your chances of winning Abhishek's ebook that way, to be honest! :)

rasenganai commented 4 years ago

@Optimox I actually commented with the company account; that was not my personal account.

rasenganai commented 4 years ago

@Optimox I think the problem here is how to train the weak learners.

In boosted trees, the weak tree is trained with a splitting criterion such as the Gini index, while the cross entropy is used at the level of the whole algorithm, i.e. to compute the residuals on which the next tree is trained.

But here each step needs gradients to be trained, unlike a tree (which only needs the Gini index).

A solution could be to train each step with cross-entropy (for 1 or 2 epochs or gradient steps, as weak learners), predicting class probabilities, then use those probabilities to compute the residuals on which the next step is trained with cross-entropy in the same way, and so on?
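One possible reading of this, sketched below under the assumption that each step contributes additively to the logits and that earlier contributions are frozen (detached) while training the current step; this is only an interpretation, not something from the paper or the repo:

```python
# Hypothetical sketch: stage-wise training where each step's logits are added
# to the frozen sum of earlier steps, and every stage is trained with
# cross-entropy on the accumulated logits.
import torch
import torch.nn.functional as F

def stagewise_step_loss(step_logits, targets, shrinkage=0.3):
    # step_logits: list of (batch, n_classes) tensors, one per decision step
    accumulated = torch.zeros_like(step_logits[0])
    losses = []
    for logits in step_logits:
        accumulated = accumulated.detach() + shrinkage * logits  # freeze earlier steps
        losses.append(F.cross_entropy(accumulated, targets))
    return sum(losses)
```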

AlexisMignon commented 4 years ago

@Optimox You may want to have a look at Friedman's paper on gradient boosting: https://statweb.stanford.edu/~jhf/ftp/trebst.pdf You'll see what I meant about using regressors only.

@Jaskaran170599

In the case of the gradient boosting technique, the output of each step would be multiplied by a learning rate and summed to get the log-odds, on which we can apply a sigmoid to get the probability (0/1)?

Exactly. The idea is to fit the decision function before the sigmoid is applied, and to compute the gradient with respect to the values of this decision function. So at each step, the weak learner is trained to fit the gradient (hence the regressor), and the result is added (with a weight) to the previous decision function. Class probabilities can then be computed by applying the sigmoid function for binary problems or the softmax for multi-class problems.
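A small sketch of that recipe for the binary case, with `fit_regressor` as a placeholder for any regression weak learner (e.g. one TabNet step); the negative gradient of the log-loss with respect to the decision function is simply y - p:

```python
# Sketch of the binary-classification recipe described above. fit_regressor is
# a placeholder returning a fitted regression function h such that h(X) ~ targets.
import numpy as np

def boost_binary(X, y, fit_regressor, n_steps=10, lr=0.3):
    F_pred = np.zeros(len(y))                  # decision function (log-odds)
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-F_pred))      # current probabilities
        pseudo_residual = y - p                # negative gradient of the log-loss
        h = fit_regressor(X, pseudo_residual)  # weak learner fits the gradient
        F_pred = F_pred + lr * h(X)            # weighted additive update
    return 1.0 / (1.0 + np.exp(-F_pred))       # final class probabilities
```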

rasenganai commented 4 years ago

@AlexisMignon Yeah, and I think in the TabNet case that weak learner is one block of the architecture, and the main task, which differs from boosting algorithms, is how to train that block.

bibhabasumohapatra commented 2 years ago

https://github.com/tusharsarkar3/XBNet

Optimox commented 2 years ago

Thanks @bibhabasumohapatra, looks promising. Is there a research paper related to the repo?

bibhabasumohapatra commented 2 years ago

Thanks @bibhabasumohapatra, looks promising. Is there a research paper related to the repo?

Yes. https://arxiv.org/abs/2106.05239

ShuyangenFrance commented 2 years ago

https://github.com/tusharsarkar3/XBNet

This is a good job, but rather a completely different design from my point of view.