nnaisense / evotorch

Advanced evolutionary computation library built directly on top of PyTorch, created at NNAISENSE.
https://evotorch.ai
Apache License 2.0

[Feature request] Make the is_terminated property manipulable #80

Closed: famura closed this issue 1 year ago

famura commented 1 year ago

Hi there,

first off, I'm new to evotorch and must say that I enjoy it a lot so far. The (basic) things I tried work out of the box, the hooks are awesome, and the tutorial makes the entry barrier really low.

That being said, I just discovered the is_terminated property of SearchAlgorithm, which seems to be False in every case. How about turning this property into a read-only accessor for an internal self._is_terminated attribute that the user could manipulate? My goal is to sneak a custom stopping criterion in via the searcher's before_eval_hook. My stopping criterion is actually a stateful instance of a custom class. Do you think sneaking this object into the searcher instance via a hook would be the right way? Moreover, I think that a beginning_of_run_hook (similar to the existing end_of_run_hook) would be a nice addition.

Best wishes, Fabio

engintoklu commented 1 year ago

Hello, Fabio!

Thank you very much for raising this issue! Sorry for my delayed reply.

first off, I'm new to evotorch and must say that I enjoy it a lot so far. The (basic) things I tried work out of the box, the hooks are awesome, and the tutorial makes the entry barrier really low.

Very glad to hear these things! Thank you very much for your feedback!

That being said, I just discovered the is_terminated property of SearchAlgorithm, which seems to be False in every case.

Indeed, you are right: the is_terminated property of our algorithms is, for now, hard-coded to return False. The main reason for this is that stopping-criteria support for our default algorithm implementations is still a work in progress.

How about turning this property into a read-only accessor for an internal self._is_terminated attribute that the user could manipulate? Moreover, I think that a beginning_of_run_hook (similar to the existing end_of_run_hook) would be a nice addition.

Thank you very much for these suggestions! They sound like very nice features to me. I will post updates here. Until there is progress on them, would you consider customizing the is_terminated property via subclassing? For example, if you want a version of CEM with specialized termination criteria, perhaps you could do this:

from evotorch.algorithms import CEM

class CustomizedCEM(CEM):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    @property
    def is_terminated(self) -> bool:
        # Termination criteria analysis goes here.
        # If the criteria are dependent on the problem object,
        # you might want to access the problem object from here
        # via: self.problem
        ...
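
(Just as a rough usage sketch, not something from the evotorch docs: the toy problem definition and the hyperparameter values below are arbitrary, and any single-objective Problem would do in its place.)

import torch
from evotorch import Problem

# A toy minimization problem, only to illustrate constructing the customized searcher.
problem = Problem("min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1, 1))

# CustomizedCEM simply forwards all keyword arguments to CEM.
searcher = CustomizedCEM(problem=problem, popsize=50, parenthood_ratio=0.25, stdev_init=0.5)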

What do you think? Would this work for you?

Please also note that algorithm.run(100) means that 100 generations will be run no matter what; the is_terminated property will NOT be checked. Of course, you can always do this instead:

while not (my_search_algorithm_instance.is_terminated):
    my_search_algorithm_instance.step()

My goal is to sneak in a custom stopping criterion via the searchers before_eval_hook. Actually, my stopping criterion is a stateful instance of a custom class. Do you think it would be the right way to sneak this object into the searcher instance via a hook?

Using before_eval_hook sounds feasible to me. But have you considered using after_eval_hook for this? With the help of after_eval_hook, you should be able to notice that the stopping criteria are satisfied, and then end your evolution loop before initiating the next generation.
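
In rough terms, and only as a sketch (the StopOnStagnation class and its tolerance/patience values below are made up, and I am assuming here that after_eval_hook lives on the problem object and that its callables receive the evaluated SolutionBatch), it could look like this:

import torch
from evotorch import Problem
from evotorch.algorithms import CEM

class StopOnStagnation:
    """Raises a flag once the best evaluation stops improving."""

    def __init__(self, tolerance: float = 1e-6, patience: int = 10):
        self.tolerance = tolerance
        self.patience = patience
        self.best_so_far = None
        self.stale_iterations = 0
        self.should_stop = False

    def __call__(self, batch):
        # Called after each batch evaluation with the evaluated SolutionBatch.
        current_best = float(torch.min(batch.evals))  # assuming minimization
        if self.best_so_far is None or self.best_so_far - current_best > self.tolerance:
            self.best_so_far = current_best
            self.stale_iterations = 0
        else:
            self.stale_iterations += 1
        self.should_stop = self.stale_iterations >= self.patience

problem = Problem("min", lambda x: torch.sum(x**2), solution_length=10, initial_bounds=(-1, 1))
stopper = StopOnStagnation()
problem.after_eval_hook.append(stopper)

searcher = CEM(problem, popsize=50, parenthood_ratio=0.25, stdev_init=0.5)
for _ in range(500):  # hard upper bound on the number of generations
    searcher.step()
    if stopper.should_stop:
        break

Keeping the criterion as its own object this way would also match your point about it being a stateful instance of a custom class.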

Regards, Engin

famura commented 1 year ago

Hi Engin,

thank you for your detailed feedback, and don't worry about the reply speed :)

Subclassing the evotorch algorithms would be a viable solution. Due to some particularities on my side, and because the run() method is quite slim, I chose a different approach (see the code snippet below if interested): I basically replicated the stepping loop myself, sneaking in my convergence criteria.

import sys

import tqdm

# ... omitted ...

# Mimic the SearchAlgorithm.run() function from evotorch and include the custom stopping criteria.
searcher.reset_first_step_datetime()
for _ in tqdm.tqdm(range(self.num_iter), desc="Optimizing", unit="iterations", file=sys.stdout, leave=True):
    # Get the solution batch, evaluate it, and update it once.
    searcher.step()

    # Get the very best parameters.
    evo_param = searcher.status["best"].values

    # Stop early if the parameters' magnitude and the cost values do not change significantly.
    param_converged = self.param_conv_crit.is_met(evo_param)
    cost_converged = self.objective_conv_crit.is_met(searcher.status["best_eval"])
    if param_converged and cost_converged:
        break

# As in SearchAlgorithm.run(), fire the end-of-run hook once after the loop has finished.
if len(searcher.end_of_run_hook) >= 1:
    searcher.end_of_run_hook(dict(searcher.status))

I am fully aware that stopping based on the very best parameter set might be suboptimal; however, it has worked well for me so far.

You are right, the after_eval_hook would be a better choice. I can't recall why I wanted to check for convergence before evaluating :)

I'll close this issue and follow your updates. Thanks.