Closed — Leviathan321 closed this issue 11 months ago
Is the only change to store some additional data? If yes, you can store attributes directly in the `out` dictionary inside `_evaluate`, e.g.
```python
import numpy as np

from pymoo.core.problem import ElementwiseProblem


class MyProblem(ElementwiseProblem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_ieq_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]))

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = 100 * (x[0] ** 2 + x[1] ** 2)
        f2 = (x[0] - 1) ** 2 + x[1] ** 2
        val = x[0] + x[1]

        g1 = 2 * (x[0] - 0.1) * (x[0] - 0.9) / 0.18
        g2 = -20 * (x[0] - 0.4) * (x[0] - 0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]
        out["val"] = val
```
Then each individual has the attribute `val` stored. Let me know if this resolves your issue.
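Conceptually, the extra entries written to the `out` dictionary end up as attributes on each individual. A minimal sketch of that idea, without pymoo (the `Toy` class below is purely illustrative and is NOT pymoo's `Individual` implementation):

```python
# Illustrative sketch: extra out-dict entries become per-individual attributes.
# Toy is NOT pymoo's Individual; it only mimics the set/get mechanism.
class Toy:
    def __init__(self):
        self.data = {}

    def set(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)


out = {"F": [1.0, 2.0], "val": 3.5}   # what _evaluate wrote
ind = Toy()
for k, v in out.items():
    ind.set(k, v)                      # pymoo does something comparable internally

print(ind.get("val"))                  # -> 3.5
```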
Thank you, yes, I have already tried this. It works for me only for storing boolean or numerical values (maybe strings as well?). But I want to store objects, since I have a lot of simulation parameters in a simulation output. Is it possible to store objects? Of course, I could store everything as strings and then reconstruct objects from the strings, but that is not really convenient.
It should also work for dictionaries. Check out the following code:
```python
import numpy as np

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


class MyProblem(ElementwiseProblem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_ieq_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]))

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = 100 * (x[0] ** 2 + x[1] ** 2)
        f2 = (x[0] - 1) ** 2 + x[1] ** 2

        g1 = 2 * (x[0] - 0.1) * (x[0] - 0.9) / 0.18
        g2 = -20 * (x[0] - 0.4) * (x[0] - 0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]
        out["info"] = dict(x1=x[0], x2=x[1])


problem = MyProblem()
algorithm = NSGA2()

res = minimize(problem,
               algorithm,
               ("n_gen", 5),
               seed=1,
               verbose=True)

print(res.pop.get("info"))
```
Does this also work for your use case?
No, it does not work. When I execute your code, I get the following output on my machine:
```
$ python -m test
==========================================================================================
n_gen  |  n_eval  |  n_nds  |    cv_min    |    cv_avg    |     eps      |  indicator
==========================================================================================
     1 |      100 |       2 | 0.000000E+00 | 2.116332E+01 |      -       |      -
     2 |      200 |       6 | 0.000000E+00 | 1.2937658272 | 0.4461140785 |    ideal
     3 |      300 |       6 | 0.000000E+00 | 0.000000E+00 | 0.0658171959 |      f
     4 |      400 |      11 | 0.000000E+00 | 0.000000E+00 | 0.0702376147 |    ideal
     5 |      500 |      16 | 0.000000E+00 | 0.000000E+00 | 0.0336379946 |    ideal
[None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None None None None None None None None None None None None None
 None None]
```
Interesting. When I run the code, I am getting:

```
[array({'x1': 0.6551785808791553, 'x2': 0.05955644823323425}, dtype=object)
 array({'x1': 0.6705513823743234, 'x2': 0.08670655858908177}, dtype=object)
 array({'x1': 0.6843799638819847, 'x2': 0.05955644823323425}, dtype=object)
 ...
]
```
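For reference, the entries in that output are 0-d NumPy object arrays, each wrapping a plain Python dict; calling `.item()` unwraps the original object. A small standalone sketch (the values below are made up for illustration):

```python
import numpy as np

# A dict stored with dtype=object becomes a 0-d object array,
# matching the repr shown in the output above.
info = np.array(dict(x1=0.655, x2=0.059), dtype=object)

# .item() recovers the original Python dict from the 0-d array.
print(info.item()["x1"])  # -> 0.655
```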
Have you found out what the issue was? Can you try a different Python and/or numpy version and see whether you get the same output?
I have now tried the latest pymoo version with both Python 3.10 and Python 3.8; it failed in both cases.
See here: https://colab.research.google.com/drive/1e-k8pTzDM0jq0PvbF4a4HYPRW4RVD6lT?usp=sharing
Can you check if this issue persists in pymoo 0.6.1?
I just ran your notebook and I think it should be fixed.
I am pretty confident it is fixed. Please comment here and ask me to reopen if not. Thanks!
Thank you, it works now. Could you briefly elaborate on what the reason was, in case I encounter something similar in my personal projects?
Unfortunately, as much as I would like to, I cannot pinpoint this to a specific commit (but I have used this feature locally for a while, and it seems to be fixed).
I have developed a framework (https://git.fortiss.org/opensbt/opensbt-core) based on pymoo for applying heuristic algorithms for critical test case generation in the automated driving domain.
In this domain, an individual is basically a driving scenario (velocity of the car, position of the car, etc.), and evaluating the individual requires simulating the driving system for that scenario, yielding simulation traces. The problem is that I need to store those traces in the individual. As I understand it, the current individual implementation does not allow me to subclass it to define a domain-specific individual. For that reason, I am using monkey patching to replace the basic individual with my specific one:
https://git.fortiss.org/opensbt/opensbt-core/-/blob/main/model_ga/individual.py?ref_type=heads
https://git.fortiss.org/opensbt/opensbt-core/-/blob/main/run.py?ref_type=heads#L3-4
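The monkey-patching pattern described above can be sketched generically. Note that `framework` below is a stand-in module built on the fly, not pymoo's real module layout, and the class names are illustrative:

```python
# Generic sketch of the monkey-patching workaround described above.
# "framework" stands in for the library module being patched
# (e.g. the module that defines the library's Individual class);
# this is NOT pymoo's actual code, only the replacement pattern.
import types

framework = types.ModuleType("framework")


class Individual:                        # the library's original class
    def __init__(self):
        self.F = None


framework.Individual = Individual


class SimulationIndividual(Individual):  # domain-specific subclass
    def __init__(self):
        super().__init__()
        self.traces = []                 # room for simulation traces


# The patch: rebind the class attribute on the module *before* any code
# that instantiates framework.Individual runs.
framework.Individual = SimulationIndividual

ind = framework.Individual()
print(isinstance(ind, Individual), hasattr(ind, "traces"))  # True True
```

The key constraint of this pattern is import order: the patch must be applied before any other module looks up the class, which is why such patches typically sit at the very top of the entry-point script.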
Is there any other workaround I have missed?
Thank you.