Closed: rishiraj closed this issue 2 years ago
Hi, thank you for flagging this issue. Could you please tell us which arguments you're using for the hyperparameter tuner or, even better, come up with a repro in the form of a colab?
Just the exact same arguments and code shown in the official tutorial colab. The only difference is that I used it for regression instead of classification, with `task=tfdf.keras.Task.REGRESSION`.
```python
tuner = tfdf.tuner.RandomSearch(num_trials=50)
tuner.choice("min_examples", [2, 5, 7, 10])
tuner.choice("categorical_algorithm", ["CART", "RANDOM"])

local_search_space = tuner.choice("growing_strategy", ["LOCAL"])
local_search_space.choice("max_depth", [3, 4, 5, 6, 8])

global_search_space = tuner.choice("growing_strategy", ["BEST_FIRST_GLOBAL"], merge=True)
global_search_space.choice("max_num_nodes", [16, 32, 64, 128, 256])

# The argument that I think is throwing the error
tuner.choice("use_hessian_gain", [True, False])

tuner.choice("shrinkage", [0.02, 0.05, 0.10, 0.15])
tuner.choice("num_candidate_attributes_ratio", [0.2, 0.5, 0.9, 1.0])
```
I can confirm that commenting out `tuner.choice("use_hessian_gain", [True, False])` makes the error go away.
Great! Indeed, setting `tuner.choice("use_hessian_gain", [True, False])` explicitly instructs the tuner to try out both `True` and `False` for the setting `use_hessian_gain`, despite hessian gain being unavailable for MSE.
Hyper-parameter tuning by specifying the `tuner` constructor argument of the model currently only works for classification tasks. For regression tasks, where the default loss function is MSE or RMSE, it breaks: at the end of the tuning trials there is a hessian optimization phase, but Hessian learning is disabled for GBT regression with MSE loss. Hence the following error is shown: