nnaisense / evotorch

Advanced evolutionary computation library built directly on top of PyTorch, created at NNAISENSE.
https://evotorch.ai
Apache License 2.0
997 stars · 62 forks

Example running problem. #81

Closed lk1983823 closed 11 months ago

lk1983823 commented 1 year ago

When I run the "A black-box optimization" example, it always prints log messages like these:

[2023-07-10 16:16:21] INFO     <29632> evotorch.core: Instance of `Problem` (id:139776352151312) -- The `dtype` for the problem's decision variables is set as torch.float32
[2023-07-10 16:16:21] INFO     <29632> evotorch.core: Instance of `Problem` (id:139776352151312) -- `eval_dtype` (the dtype of the fitnesses and evaluation data) is set as torch.float32
[2023-07-10 16:16:21] INFO     <29632> evotorch.core: Instance of `Problem` (id:139776352151312) -- The `device` of the problem is set as cpu
[2023-07-10 16:16:21] INFO     <29632> evotorch.core: Instance of `Problem` (id:139776352151312) -- The number of actors that will be allocated for parallelized evaluation is 0

Are there any parameters I have not set properly? How can I suppress these messages? Thanks.

engintoklu commented 1 year ago

Hello @lk1983823!

Thank you for trying out EvoTorch!

To disable these INFO messages, please try adding these lines at the top of your script/notebook:

from evotorch.tools import set_default_logger_config
import logging

set_default_logger_config(
    logger_level=logging.WARNING,  # only print the message if it is at least a WARNING
    override=True,  # override the previous logging settings of EvoTorch
)

Does this work for you?

Happy coding! Engin

lk1983823 commented 1 year ago

@engintoklu Thanks. The INFO messages are now suppressed, but another warning appears:

/home/lk/anaconda3/envs/dpc/bin/python /media/lk/lksgcc/lk_git/3_Reinforcement_Learning/3_4_MPC/evotorch/examples/scripts/simple_exmaple.py
/home/lk/anaconda3/envs/dpc/lib/python3.10/site-packages/torch/_tensor.py:1295: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  ret = func(*args, **kwargs)
/home/lk/anaconda3/envs/dpc/lib/python3.10/site-packages/evotorch/tools/readonlytensor.py:99: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  self_ptr = self.storage().data_ptr()
/home/lk/anaconda3/envs/dpc/lib/python3.10/site-packages/evotorch/tools/readonlytensor.py:100: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  other_ptr = other.storage().data_ptr()
/home/lk/anaconda3/envs/dpc/lib/python3.10/site-packages/evotorch/core.py:3425: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  shares_storage = self._data.storage().data_ptr() == source._data.storage().data_ptr()

My torch version is 2.0.0+cu117 and my Python version is 3.10. I can suppress the warnings with:

import warnings
warnings.filterwarnings('ignore')
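Note that warnings.filterwarnings('ignore') silences all warnings, including potentially useful ones. A narrower filter that targets only this deprecation message (a sketch using only the standard library; the message pattern matches the warning text shown above) would be:

```python
import warnings

# Suppress only the "TypedStorage is deprecated" UserWarning;
# all other warnings remain visible. The `message` argument is a
# regular expression matched against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="TypedStorage is deprecated",
    category=UserWarning,
)
```
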
engintoklu commented 1 year ago

Hello @lk1983823!

Thanks for the feedback, and thanks for sharing the filterwarnings trick!

The function evotorch.tools.set_default_logger_config(...) can be used to show/hide messages generated by EvoTorch, but this other warning you mentioned above is actually generated by PyTorch, so evotorch.tools.set_default_logger_config(...) has no control over it.

This PyTorch warning indicates that, beginning with PyTorch 2.0, Tensor.storage() (and the TypedStorage objects it returns) is deprecated in favor of Tensor.untyped_storage(). The current version of EvoTorch still calls storage().data_ptr(), which is why the warning gets triggered. The good news is that there is a branch of EvoTorch where we use the up-to-date counterpart, so the warning is no longer triggered there. Hopefully, with the new version of EvoTorch, you will not see this warning again.
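For illustration, here is a minimal sketch of the two call styles (assuming PyTorch >= 2.0; the variable names are only examples, not EvoTorch internals):

```python
import torch

a = torch.zeros(4)
b = a.view(2, 2)  # a view shares the same underlying storage as `a`

# Deprecated style (emits "TypedStorage is deprecated" on PyTorch >= 2.0):
#     a.storage().data_ptr() == b.storage().data_ptr()

# Updated style, no deprecation warning:
shares_storage = a.untyped_storage().data_ptr() == b.untyped_storage().data_ptr()
print(shares_storage)  # True, since `b` is a view of `a`
```
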

Thank you for your feedback, and happy coding!

adoepp commented 1 year ago

May I use the same issue to ask about another problem running the examples? I was trying out the examples from the website on a fresh installation of EvoTorch. The "GPU acceleration" example worked fine, but when running the "Multiple Objectives" example I get the following error:

 ... in SearchAlgorithm.run(self, num_generations, reset_first_step_datetime)
    422     self.reset_first_step_datetime()
    424 for _ in range(int(num_generations)):
--> 425     self.step()
    427 if len(self._end_of_run_hook) >= 1:
    428     self._end_of_run_hook(dict(self.status))

... in SearchAlgorithm.step(self)
    387 if self._first_step_datetime is None:
    388     self._first_step_datetime = datetime.now()
--> 390 self._step()
    391 self._steps_count += 1
    392 self.update_status({"iter": self._steps_count})

... in SteadyStateGA._step(self)
    859     if len(self._operators) == 0:
    860         raise RuntimeError(
    861             f"This {type(self).__name__} instance does not know how to proceed, "
...
   4007         )
   4009 if ranking_method is None:
   4010     result = evdata * self._get_objective_sign(obj_index)

   ValueError: Cannot compute the utility values, because there are solutions which are not evaluated yet.

I am just starting to look into the package and am struggling to trace where this error originates. Any help getting a basic multi-objective optimization example to work would be appreciated ...

engintoklu commented 1 year ago

Hello @adoepp ! Thank you very much for your feedback! There is now a pull request addressing the error you mentioned: #87