What is the expected behavior?
The loaded model returns feature importances.

What is the motivation or use case for adding/changing the behavior?
Feature analysis is always important.

How should this be implemented in your opinion?
I will take a look at the code, but if it works the same way as in the training process, that will be enough.
Feature request
Visualizing feature importance scores with a pretrained model. For now, following https://www.kaggle.com/code/optimo/tabnet-with-loop-feature-engineering-explained/notebook and https://github.com/dreamquark-ai/tabnet/issues/392, we can plot feature importance during the training process. However, after loading a model with
load_model
I have trouble getting the feature importance scores, even though I pass X_train appropriately (the same as in the training process).
Are you willing to work on this yourself? Yes.
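As a possible workaround sketch (not the library's confirmed behavior): pytorch-tabnet's `explain(X)` returns per-sample explanation masks, which could be aggregated into per-feature importances after `load_model`, mirroring what happens during training. The aggregation below is plain NumPy; the commented `clf` / `X_train` usage at the end is illustrative and the file path is hypothetical.

```python
import numpy as np

def importances_from_masks(M_explain):
    """Aggregate per-sample explanation masks of shape (n_samples, n_features)
    into a normalized per-feature importance vector that sums to 1."""
    totals = np.asarray(M_explain, dtype=float).sum(axis=0)
    return totals / totals.sum()

# Illustrative usage with a loaded TabNet model (assumes the pytorch-tabnet API):
# from pytorch_tabnet.tab_model import TabNetClassifier
# clf = TabNetClassifier()
# clf.load_model("tabnet_model.zip")       # hypothetical saved-model path
# M_explain, masks = clf.explain(X_train)  # same X_train as used in training
# importances = importances_from_masks(M_explain)
```

Whether this matches the importances computed internally during `fit` would need checking against the training-time code path, as noted above.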