What kind of evaluation metric would you like to use during pretraining?
Hey, thanks for the prompt reply!
I was hoping to have the default metrics (such as `logloss` and `auc` for the classifier) available during pretraining, as well as custom metrics as documented.
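For reference, here is a minimal sketch of the documented custom-metric pattern for the supervised estimators (adapted from the README's Gini example); the toy data and the metric name are illustrative:

```python
import numpy as np
from pytorch_tabnet.metrics import Metric
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.metrics import roc_auc_score

# Toy data standing in for a real binary classification dataset
X_train = np.random.rand(1000, 10)
y_train = np.random.randint(0, 2, 1000)
X_valid = np.random.rand(200, 10)
y_valid = np.random.randint(0, 2, 200)


class Gini(Metric):
    # Custom eval metric: Gini coefficient derived from ROC AUC
    def __init__(self):
        self._name = "gini"      # name shown in the training logs
        self._maximize = True    # higher is better

    def __call__(self, y_true, y_score):
        auc = roc_auc_score(y_true, y_score[:, 1])
        return max(2 * auc - 1, 0.0)


clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["logloss", "auc", Gini],  # built-in names plus the custom class
)
```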
The pretrainer performs unsupervised training: there is no target to predict; the model is trying to reconstruct the masked inputs it receives. So neither logloss nor AUC makes any sense here.
A custom metric could be used, but I think it would be a minor improvement. What kind of unsupervised metric would you use?
Ah, my bad -- you're right. I am not sure that using one of the traditional unsupervised-learning metrics (k-means clustering scores, the Rand index, etc.) is a proper way to evaluate pretraining either.
What about having a module that compares/visualizes the reconstructed x against the original x? My intention was to confirm the performance of the pretrainer.
It's an interesting topic. At the moment you can expect to have a 'working' reconstruction as long as the pretraining loss is below 1.
Feel free to propose some other methods in a PR so we can discuss them more concretely.
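To make the reconstruction-comparison idea concrete, here is a minimal sketch, assuming the README-style pretrainer setup and assuming `predict` on the pretrainer returns the reconstructed inputs alongside the embedded ones (the exact return value may differ between versions):

```python
import numpy as np
import torch
from pytorch_tabnet.pretraining import TabNetPretrainer

# Toy data standing in for a real tabular dataset
X_train = np.random.rand(1000, 10).astype(np.float32)
X_valid = np.random.rand(200, 10).astype(np.float32)

pretrainer = TabNetPretrainer(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
)
pretrainer.fit(
    X_train=X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
    max_epochs=5,
)

# Assumption: predict returns (reconstructed inputs, embedded inputs);
# check the return value in your pytorch-tabnet version.
reconstructed, embedded = pretrainer.predict(X_valid)

# Per-feature mean squared error between original and reconstructed values,
# a rough proxy for how "working" the reconstruction is.
per_feature_mse = ((X_valid - reconstructed) ** 2).mean(axis=0)
print("per-feature reconstruction MSE:", np.round(per_feature_mse, 4))
```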
Describe the bug
`eval_metric` is not available in `TabNetPretrainer.fit` even though it is described as one of its parameters -- it seems `eval_metric` was forgotten in the accepted parameters.

What is the current behavior?
Got an error:
TypeError: fit() got an unexpected keyword argument 'eval_metric'
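A minimal sketch of the kind of call that presumably triggers this error, following the README-style pretrainer setup (the data and hyperparameters are illustrative):

```python
import numpy as np
import torch
from pytorch_tabnet.pretraining import TabNetPretrainer

X_train = np.random.rand(1000, 10).astype(np.float32)
X_valid = np.random.rand(200, 10).astype(np.float32)

pretrainer = TabNetPretrainer(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
)

# Passing eval_metric here raises the TypeError above, because
# TabNetPretrainer.fit does not accept that keyword in the affected version.
pretrainer.fit(
    X_train=X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
    eval_metric=["logloss"],  # unexpected keyword argument
)
```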