Closed: cysun0226 closed this issue 3 years ago
Hello,
Thank you for submitting this issue; we are aware of this problem. For most tasks, the current code should work fine.

Here are the suggested workarounds: 1) normalize your f(x) inside your function; 2) increase the leaf size; 3) instead of using K-means, learn a linear or non-linear regressor to separate the samples into two groups. We will release this feature after NeurIPS this year.
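Suggestion (1) can be sketched as follows. This is a minimal illustration, not LA-MCTS code: `raw_objective`, `F_MIN`, and `F_MAX` are hypothetical placeholders for your own function and its (assumed known) output bounds.

```python
import numpy as np

# Hypothetical raw objective whose outputs sit in a narrow band.
def raw_objective(x):
    return 0.01 * np.tanh(np.sum(x))  # values in (-0.01, 0.01)

F_MIN, F_MAX = -0.01, 0.01  # assumed output bounds of raw_objective

def scaled_objective(x):
    """Rescale f(x) to roughly [-1, 1] before handing it to LA-MCTS,
    so the K-means split sees values on a non-degenerate scale."""
    y = raw_objective(x)
    return 2.0 * (y - F_MIN) / (F_MAX - F_MIN) - 1.0
```

You would then pass `scaled_objective` (instead of the raw function) to the optimizer; any bounds that actually fit your task can be substituted for `F_MIN`/`F_MAX`.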
Thanks for the reply and suggestions! Looking forward to your future work & updates 🙌
Thank you. Before I close this issue, may I ask whether normalization or increasing the leaf size resolved it for you? Thanks.
Please note that this scenario can happen if each element of x and f(x) lies in a very small range, e.g. [-0.01, 0.01], or if the entries of [x, f(x)] are very similar across samples.
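A quick way to detect that degenerate case up front is to check the spread of the stacked [x, f(x)] rows before attempting a split. This is a standalone sketch, not part of the LA-MCTS API; the tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def samples_nearly_identical(X, fX, tol=1e-6):
    """Return True when the rows of [x, f(x)] span a negligible range,
    i.e. the case where K-means cannot find two distinct clusters.
    `tol` is an illustrative threshold, not a value from the repo."""
    data = np.hstack([X, np.asarray(fX).reshape(-1, 1)])
    # ptp = per-column max minus min; if even the widest column is
    # below tol, all samples are effectively the same point.
    return float(np.ptp(data, axis=0).max()) < tol
```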
Hi,
Recently I have been trying to use LA-MCTS to optimize my own task, but some errors occurred during execution.
My task has 9 dimensions, and the HP setting is:

- `Cp` = 10 (following the paper's suggestion, ~10% of max f(x))
- `leaf_size` = 10
- `ninits` = 40
- `kernel_type` = "rbf"

Here is the error log:
Since it occurs at `Classifier.learn_clusters()`, I printed the result of `self.kmean.predict(tmp)` and found some clues: the values of `plabel` (the result of `self.kmean.predict(tmp)`) are all the same.

I temporarily avoid this exception by making `is_splittable_svm` return False when `plabel` only contains a single class. However, I would like to know: can this happen in the general case, or might it be caused by my own function?
Thank you for the work & sharing your code!