dreamquark-ai / tabnet

PyTorch implementation of the TabNet paper: https://arxiv.org/pdf/1908.07442.pdf
https://dreamquark-ai.github.io/tabnet/
MIT License

eval_metric is missing in Pretrainer #419

Closed · sooeun67 closed this issue 1 year ago

sooeun67 commented 2 years ago

Describe the bug: eval_metric is not available in TabNetPretrainer.fit even though it is described as one of the parameters -- it looks like eval_metric was left out of the fit parameters.

What is the current behavior? Passing eval_metric raises TypeError: fit() got an unexpected keyword argument 'eval_metric'.

If the current behavior is a bug, please provide the steps to reproduce.
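
A minimal sketch of the kind of call that triggers the error (random data and default arguments used purely for illustration, not the reporter's actual setup):

```python
import numpy as np
from pytorch_tabnet.pretraining import TabNetPretrainer

# Illustrative random data, just to exercise the API.
X_train = np.random.rand(256, 10)
X_valid = np.random.rand(64, 10)

pretrainer = TabNetPretrainer()
pretrainer.fit(
    X_train=X_train,
    eval_set=[X_valid],
    eval_metric=["logloss"],  # not accepted by TabNetPretrainer.fit
    pretraining_ratio=0.8,
    max_epochs=5,
)
# TypeError: fit() got an unexpected keyword argument 'eval_metric'
```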

[Screenshot: 2022-07-12, 4:29 PM]
Optimox commented 2 years ago

What kind of evaluation metric would you like to use during pretraining?

sooeun67 commented 2 years ago

Hey, thanks for the prompt reply! I was hoping to have the default metrics (such as logloss and AUC for the classifier) available during pretraining, as well as custom metrics, as documented -- roughly the supervised usage sketched below.
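
For context, this is roughly how eval_metric (including a documented custom Metric subclass) is used with the supervised models today; the request was for TabNetPretrainer.fit to accept something similar. The data and metric here are illustrative only:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from pytorch_tabnet.tab_model import TabNetClassifier
from pytorch_tabnet.metrics import Metric

class Gini(Metric):
    # Custom metric following the documented Metric interface.
    def __init__(self):
        self._name = "gini"
        self._maximize = True

    def __call__(self, y_true, y_score):
        auc = roc_auc_score(y_true, y_score[:, 1])
        return max(2 * auc - 1, 0.0)

# Illustrative random data.
X_train, y_train = np.random.rand(256, 10), np.random.randint(0, 2, 256)
X_valid, y_valid = np.random.rand(64, 10), np.random.randint(0, 2, 64)

clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["auc", "logloss", Gini],  # built-in names or Metric subclasses
    max_epochs=5,
)
```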

Optimox commented 2 years ago

The pretrainer is unsupervised training: there is no target to predict, the model is trying to reconstruct the missing inputs it receives. So neither logloss nor AUC makes any sense here.

A custom metric could be used, but I think it would be a minor improvement. What kind of unsupervised metric would you use?

sooeun67 commented 2 years ago

Ah, my bad -- you're right. I am not sure that using one of the traditional unsupervised-learning metrics, such as k-means clustering scores or the Rand index, is a proper way to evaluate pretraining either.

What about having a module that compares/visualizes the reconstructed x against the original x, something like the sketch below? My intention was to confirm the performance of the pretrainer.
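
A rough sketch of such a check, assuming TabNetPretrainer.predict returns the reconstruction together with the embedded inputs (the reconstruction lives in the embedded feature space, so the natural comparison is against the embedded x rather than the raw x; exact return values may differ between versions):

```python
import numpy as np
from pytorch_tabnet.pretraining import TabNetPretrainer

# Illustrative random data.
X_train = np.random.rand(1024, 10)
X_valid = np.random.rand(256, 10)

pretrainer = TabNetPretrainer()
pretrainer.fit(X_train=X_train, eval_set=[X_valid], pretraining_ratio=0.8, max_epochs=10)

# Assumed: predict returns (reconstructed_x, embedded_x) for the pretrainer.
reconstructed, embedded = pretrainer.predict(X_valid)

# Per-feature squared error, scaled by each feature's variance so that a value
# around 1 means "no better than predicting the feature mean".
per_feature_mse = ((reconstructed - embedded) ** 2).mean(axis=0)
normalized_error = per_feature_mse / (embedded.var(axis=0) + 1e-9)
print(normalized_error)
```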

Optimox commented 2 years ago

It's an interesting topic. At the moment you can expect a 'working' reconstruction as long as the pretraining loss is below 1.
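
For reference, a rough sketch of why 1 is the natural threshold: the pretraining loss scales the squared error of each masked cell by that feature's variance over the batch, so always predicting the feature mean lands around 1, and anything below that means the reconstruction carries real signal. The hypothetical helper below mirrors that idea, not the library's exact loss implementation:

```python
import torch

def normalized_reconstruction_error(y_pred, embedded_x, obf_vars, eps=1e-9):
    """Squared error on the masked (obfuscated) cells, scaled by each feature's
    batch variance; predicting the per-feature mean gives a value close to 1."""
    errors = (y_pred - embedded_x) * obf_vars    # only masked cells count
    feature_var = embedded_x.var(dim=0) + eps    # per-feature variance
    cell_errors = errors ** 2 / feature_var      # variance-normalized errors
    return cell_errors.sum() / (obf_vars.sum() + eps)
```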

Feel free to propose some other methods in a PR so we can discuss this more concretely.