toshihikoyanase opened 4 months ago
Hi @toshihikoyanase , could I take up this issue?
Hi @csking101 . Thank you for your interest. Please go ahead!
Hey @toshihikoyanase, I'd like to share my approach and get feedback on it:
Is this the right path?
@csking101 Thank you for sharing the design of the sampler.
I will implement a Sampler from the BaseSampler
Yes, you can use BaseSampler. Alternatively, you may use SimpleBaseSampler, which simplifies the implementation. This tutorial shows how to use SimpleBaseSampler.
I will store the following variables in it: evaluated_points (list), remaining_points (list), best_points (list)
Sounds good. evaluated_points might not be necessary, since Study holds all evaluated points as Study.trials.
My search space is a discretized version of the space in each distribution. I will take a point, and its neighbours will be all the points offset from the current_point along each distribution (so for d dimensions, there will be 2*d neighbours). Is this correct for the relative sampling method? In other words, we move along one axis at a time; or should multiple axes be considered at once?
Please move along a single axis at a time, since this issue targets straightforward hill climbing. As you mentioned, this approach can involve evaluating a large number of neighbours, but it should remain manageable for simple problems.
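The single-axis neighbourhood discussed above might be sketched in plain Python as follows (this is an illustrative sketch, not the actual sampler code; the names `neighbours`, `current_point`, and `grids` are assumptions made for the example):

```python
# Hypothetical sketch of single-axis neighbour generation over a
# discretized search space. Each neighbour differs from the current
# point by one step along exactly one axis, giving up to 2*d
# neighbours for d dimensions (fewer at the grid boundary).

def neighbours(current_point, grids):
    """Return points differing from current_point along exactly one axis.

    current_point: dict mapping parameter name -> index into its grid
    grids: dict mapping parameter name -> list of allowed values
    """
    result = []
    for name, idx in current_point.items():
        for step in (-1, +1):
            new_idx = idx + step
            if 0 <= new_idx < len(grids[name]):  # stay inside the grid
                moved = dict(current_point)
                moved[name] = new_idx
                result.append(moved)
    return result

grids = {"x": [0, 1, 2, 3], "y": [-1.0, 0.0, 1.0]}
point = {"x": 1, "y": 1}
# d = 2 dimensions, away from the boundary: 2*d = 4 neighbours.
print(len(neighbours(point, grids)))  # 4
```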
The rest of the algorithm remains the same: I will take whichever neighbour gives the best objective value as my new current_point, and repeat until there is no improvement.
Sounds reasonable.
When this happens, I will start a new trial from another random point, unrelated to the previous trials (though I will keep the best points found in earlier climbs).
Sounds good! Please restart from a randomly chosen point.
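The climb-then-restart loop described in this exchange might be sketched in plain Python, independently of Optuna (the function `hill_climb`, the index-based point representation, and the minimization convention are all illustrative assumptions, not the PR's code):

```python
import random

# Illustrative sketch: evaluate all single-axis neighbours, move to the
# best improving one, and restart from a random point once no neighbour
# improves. Points are dicts mapping parameter name -> grid index.

def hill_climb(objective, grids, n_restarts=5, seed=0):
    rng = random.Random(seed)
    best_points = []  # best point reached in each restart

    def value(point):
        return objective({n: grids[n][i] for n, i in point.items()})

    for _ in range(n_restarts):
        # Restart from a randomly chosen point, unrelated to earlier climbs.
        current = {n: rng.randrange(len(grids[n])) for n in grids}
        while True:
            candidates = []
            for n, idx in current.items():
                for step in (-1, 1):
                    j = idx + step
                    if 0 <= j < len(grids[n]):
                        candidates.append({**current, n: j})
            improving = [c for c in candidates if value(c) < value(current)]
            if not improving:
                break  # local optimum reached; fall through to next restart
            current = min(improving, key=value)  # move to best neighbour
        best_points.append(current)
    return min(best_points, key=value)

grids = {"x": list(range(-5, 6)), "y": list(range(-5, 6))}
best = hill_climb(lambda p: p["x"] ** 2 + p["y"] ** 2, grids)
print({n: grids[n][i] for n, i in best.items()})  # {'x': 0, 'y': 0}
```

On a convex objective like this one, every restart walks to the global optimum; restarts only matter once the objective has multiple local optima.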
Should I give some maximum limit for the number of trials?
There's no need to set a limit within the sampler, as users can specify the maximum number of trials via Study.optimize.
Thank you for the reply, I will open a PR shortly!
Motivation
Hill climbing is a local search algorithm commonly used for optimization problems. The sampler based on the hill-climbing algorithm aims to provide a simple yet effective way to explore the hyperparameter space and find good solutions.
Description
For simplicity, this hill-climbing sampler will only support the discrete case, where hyperparameters are represented as integers or categorical values.
The key components of the hill-climbing sampler will include: