Project-Platypus / Platypus

A Free and Open Source Python Library for Multiobjective Optimization
GNU General Public License v3.0

Track of evaluations #165

Closed: javiervinuales closed 1 year ago

javiervinuales commented 3 years ago

Hello,

I am using Platypus to optimize designs with computational fluid dynamics (CFD). That means my evaluation function needs to create a folder and run a CFD simulation, from which the objectives/constraints are extracted. Each evaluation is stored in a numbered folder (i=1,2,3...). I am interested in keeping track of how the final optimization population (algorithm.result) relates to the original evaluations (i=1,2,3...), since the simulations contain a lot of useful data I need to use. How can I do that?

Thanks!

jetuk commented 3 years ago

Two options I can think of:

  1. Use your output folders for the results, i.e. ignore the algorithm.result data. At the end of the run, read the objectives and constraints you have saved separately (I've done this before with an external database), then perform your own non-dominated sort of the feasible solutions. You then know exactly which solution is which and have access to all the corresponding metadata.

  2. Subclass Problem and implement your own evaluate method. This method calls your CFD model, but also assigns a unique attribute (e.g. .id) to the solution corresponding to your run folder name/UUID. That attribute should then be available on each of the solutions in algorithm.result at the end of the run. A rough sketch is shown after this list.
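For option 2, a minimal sketch could look something like the following. The problem sizes (two variables, two objectives, one constraint) are purely illustrative, and `run_cfd_case` is a hypothetical placeholder for your own code that creates the run folder, launches the CFD solver, and parses the results. Attaching an extra attribute to the solution should work because Platypus Solution objects are plain Python objects.

```python
import itertools
from platypus import NSGAII, Problem, Real

class CFDProblem(Problem):
    def __init__(self):
        # illustrative sizes: 2 design variables, 2 objectives, 1 constraint
        super().__init__(2, 2, 1)
        self.types[:] = [Real(0.0, 10.0), Real(0.0, 10.0)]
        self.constraints[:] = "<=0"
        self._counter = itertools.count(1)  # matches the i = 1, 2, 3, ... folders

    def evaluate(self, solution):
        run_id = next(self._counter)
        # run_cfd_case is a hypothetical placeholder for your own code that
        # creates the run folder, runs the CFD simulation, and extracts
        # the objective and constraint values from its output
        f1, f2, g1 = run_cfd_case(run_id, solution.variables[:])
        solution.objectives[:] = [f1, f2]
        solution.constraints[:] = [g1]
        solution.run_id = run_id  # tag the solution with its folder index

problem = CFDProblem()
algorithm = NSGAII(problem)
algorithm.run(10000)

# each surviving solution now carries the folder it was evaluated in
for s in algorithm.result:
    print(s.run_id, s.objectives)
```

Note that an in-process counter like this would not be shared across workers if you later switch to a parallel evaluator; in that case a UUID, or an identifier returned by the CFD wrapper itself, would be a safer choice for tagging the solution.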

github-actions[bot] commented 1 year ago

This issue is stale and will be closed soon. If you feel this issue is still relevant, please comment to keep it active. Please also consider working on a fix and submitting a PR.

debpal commented 2 weeks ago

See the answer to issue #210.