Closed Yuntian9708 closed 1 year ago
Please see this issue.
P.S. The implementation of the paper is now located here: https://github.com/yandex-research/tabular-dl-revisiting-models
If you need further help, feel free to create a new issue in that repository or to continue the discussion in the issue mentioned above.
Hi, I am trying to produce interpretable visualizations from FT-Transformer, such as feature importances or attention heatmaps. I found the discussion of feature importance in Section 5.3 of the paper, but I don't know how to reproduce it. Is there a way to do this with the source code you published? Or could you provide an example implementation? Thank you very much!
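For reference, Section 5.3 of the paper describes computing feature importances by averaging the [CLS] token's attention distribution over all heads and layers. Below is a minimal NumPy sketch of that averaging step, assuming you have already captured the attention maps (e.g. via PyTorch forward hooks on the attention modules); the function name, the array layout, and the convention that token 0 is [CLS] are my own assumptions for illustration, not part of the published code.

```python
import numpy as np

def cls_attention_importance(attn_maps):
    """Feature importances from attention maps, in the spirit of Sec. 5.3:
    average the [CLS] token's attention over all heads and layers, drop the
    [CLS] column, and renormalize over the feature tokens.

    attn_maps: array of shape (n_layers, n_heads, n_tokens, n_tokens)
               where each row is a softmax distribution (sums to 1);
               token index 0 is assumed to be [CLS] here.
    """
    attn = np.asarray(attn_maps)
    # Attention paid by [CLS] (row 0) to every token, averaged over layers/heads.
    p = attn[:, :, 0, :].mean(axis=(0, 1))
    p = p[1:]                 # keep only the feature tokens
    return p / p.sum()        # renormalize to a probability distribution

# Toy example: 2 layers, 2 heads, 1 [CLS] token + 3 feature tokens.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 4, 4))
attn = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # row-wise softmax
importances = cls_attention_importance(attn)
print(importances)  # one non-negative score per feature, summing to 1
```

In practice you would populate `attn_maps` from the real model by registering hooks that record each layer's attention probabilities during a forward pass, then average the resulting importances over a batch of objects; check the official repository linked above for the exact module names to hook.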