SvenGroen closed this issue 1 year ago
In: https://github.com/rotot0/tab-ddpm/blob/5ac62c686ab177afcf7ae97492e15ac99984a14a/scripts/tune_evaluation_model.py#L121

`train_func` is `train_catboost`, which returns a `MetricsReport` object. This object cannot be unpacked into the 4 variables (same in this line).

I assume `_m1` and `_m2` refer to "Metrics 1" and "Metrics 2" for the validation and test sets.

Since only `val_m2` is used, I would suggest:

```python
score = train_func( (...) ).get_metric(split="val", metric="[acc|f1|roc_auc]")
```

What metric do you think is best suited for tuning the CatBoost model, and what metric is `val_m2` supposed to be?
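For illustration, a minimal sketch of how that one-liner could slot into a tuning loop. The Optuna-style `objective`, the hyperparameter names, and the simplified `train_catboost` stub are assumptions made for the sketch, not the repo's actual code; only the `get_metric(split=..., metric=...)` call is taken from the suggestion above:

```python
import optuna

def train_catboost(params):
    """Stand-in for the train function in scripts/tune_evaluation_model.py.
    In the repo it trains CatBoost and returns a MetricsReport; this stub
    only mimics that interface so the sketch runs."""
    class MetricsReport:
        def get_metric(self, split, metric):
            return 0.5  # placeholder score
    return MetricsReport()

def objective(trial):
    # Hypothetical search space; names and ranges are illustrative only.
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "depth": trial.suggest_int("depth", 4, 10),
    }
    report = train_catboost(params)
    # The fix: read the single validation score instead of unpacking four values.
    return report.get_metric(split="val", metric="f1")

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
```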
Hi, sorry about this bug. My recent commit should fix the problem. We use macro-F1 for classification and R² for regression tasks, btw. And `_m1` and `_m2` are accuracy and F1 for classification, and RMSE and R² for regression.
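As a side note, here is a standalone sketch of the four metrics the reply refers to, computed with scikit-learn. That the repo's `MetricsReport` uses scikit-learn is an assumption; it may compute them differently:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, mean_squared_error, r2_score

# Classification: _m1 = accuracy, _m2 = macro-F1 (the score used for tuning)
y_true = [0, 1, 2, 1]
y_pred = [0, 1, 1, 1]
acc = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")

# Regression: _m1 = RMSE, _m2 = R^2 (the score used for tuning)
y_true_r = np.array([2.0, 0.5, 3.1])
y_pred_r = np.array([1.8, 0.7, 3.0])
rmse = np.sqrt(mean_squared_error(y_true_r, y_pred_r))
r2 = r2_score(y_true_r, y_pred_r)
```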