SepShr / MLCSHE

This repo houses the ML-Component Systemic Hazard Envelope project, or MLCSHE (pronounced /'mɪlʃ/).

Reduce the total number of simulator calls #27

Open SepShr opened 2 years ago

SepShr commented 2 years ago

Currently, the algorithm evaluates every possible combination of complete solutions; this is the main problem with the current version of the update_archive() function. Since the simulation required for joint_fitness_evaluation is highly resource-intensive, the overall number of calls to the simulator, i.e., the number of joint fitness evaluations, must be reduced.
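To give a rough sense of the scale, here is an illustrative back-of-the-envelope sketch (the population sizes are made up, not the actual configuration):

```python
from itertools import product

# Illustrative only: when every scenario individual is paired with every MLC-output
# individual, the number of simulator calls per generation grows multiplicatively
# with the population sizes.
scenarios = range(50)      # hypothetical scenario population size
mlc_outputs = range(50)    # hypothetical MLC-output population size

complete_solutions = list(product(scenarios, mlc_outputs))
print(len(complete_solutions))  # 2500 joint fitness evaluations = 2500 simulations
```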

donghwan-shin commented 2 years ago

If you want to get some ideas about Surrogate-Assisted Evolutionary Algorithms (SAEAs), you can take a look at this paper.

Meanwhile, I will also develop some ideas for addressing this issue and reply here when I am ready to discuss.
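To make the SAEA idea a bit more concrete, here is a minimal sketch of how a surrogate could pre-screen complete solutions before any expensive simulation. The names (`surrogate_screen`, `archive_X`, `archive_y`) and the choice of regressor are purely illustrative, not a proposal for the actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def surrogate_screen(archive_X, archive_y, candidates, n_simulate):
    """Rank candidate complete solutions with a surrogate trained on already-simulated
    (genotype, joint fitness) pairs; return only the n_simulate most promising ones
    for real joint fitness evaluation in the simulator."""
    surrogate = RandomForestRegressor(n_estimators=100).fit(archive_X, archive_y)
    predicted = surrogate.predict(candidates)
    best_idx = np.argsort(predicted)[:n_simulate]  # assumes lower joint fitness is better
    return [candidates[i] for i in best_idx]
```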

donghwan-shin commented 2 years ago

Hi, let me quickly summarize what I have thought about this issue.

First, I think that the "power" of iCCEA lies not only in selecting an "informative" archive but also in maintaining that archive across generations. Note that the members in the archive are copied directly into the new population without any modification. In this regard, even though iCCEA does not reduce the number of joint fitness evaluations, building an informative (yet small) archive from a population is still meaningful.
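A minimal sketch of that carry-over step, just to pin down what I mean; `breed` stands for whatever variation operator is used, and all names are illustrative:

```python
import random

def next_population(archive, previous_population, breed, pop_size):
    """Build the next population: archive members survive unchanged,
    the remaining slots are filled by breeding from the previous population."""
    population = list(archive)                 # copied directly, no modification
    while len(population) < pop_size:
        parents = random.sample(previous_population, 2)
        population.append(breed(*parents))     # placeholder variation operator
    return population
```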

Second, we need joint fitness evaluations in update_archive() simply because we want to calculate individual fitness scores based on the joint fitness scores. Therefore, if we can somehow select a set of "good" individuals as an archive without using the joint fitness scores, we don't need to worry about expensive joint fitness evaluations (at least in update_archive()). One way of doing this is to use the diversity of individuals, assuming that the more diverse the archive members are (in terms of their genotypes), the better. For example, we can select a set of individuals as an archive such that the archive preserves the population's level of diversity. I think this could be an interesting workshop paper at least.
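Here is a rough sketch of such a diversity-only archive selection, using a greedy max-min distance rule over genotype vectors. The function name, the assumption that genotypes are numeric vectors, and the Euclidean distance are all just for illustration:

```python
import numpy as np

def diverse_archive(population, archive_size):
    """Greedily pick individuals that are farthest (in genotype space) from the
    already-selected ones, so the archive preserves the population's spread
    without any joint fitness evaluation."""
    genotypes = np.asarray(population, dtype=float)
    selected = [0]                                # start from an arbitrary individual
    while len(selected) < archive_size:
        # distance of every individual to its nearest already-selected member
        dists = np.linalg.norm(
            genotypes[:, None, :] - genotypes[selected][None, :, :], axis=-1
        ).min(axis=1)
        selected.append(int(dists.argmax()))      # pick the most distant individual
    return [population[i] for i in selected]
```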

Third, according to the survey paper (Ma et al., 2019), for the collaborator selection of a single complete solution, random collaborator selection does well at maintaining the diversity of a subpopulation and preventing premature convergence [96], [120], [156], [188]. It outperforms single best collaborator selection in dynamic optimization with a fast-changing environment, but its randomness can also slow down the local convergence of CCEAs [8]. Based on this, we can simply select a set of random individuals as an archive. Or, if we no longer want to follow an archive-based approach, we can simply use single random collaborator selection.
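For completeness, a minimal sketch of single random collaborator selection; `joint_fitness` stands for the expensive simulation-backed evaluation, and the names are illustrative. With this scheme, each individual triggers one simulator call per generation instead of one per possible pairing:

```python
import random

def random_collaboration(scenarios, mlc_outputs, joint_fitness):
    """Evaluate each individual against a single random collaborator
    drawn from the other population."""
    scenario_fitness = [joint_fitness(s, random.choice(mlc_outputs)) for s in scenarios]
    mlco_fitness = [joint_fitness(random.choice(scenarios), m) for m in mlc_outputs]
    return scenario_fitness, mlco_fitness
```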