Open colbrydi opened 4 years ago
Actually, we were reviewing the code today and realized that there is a 10% overlap that I thought had been fixed. This is wasting roughly 10% of the code's runtime.
Also, mutation and crossover are not happening as expected and need to be cleaned up. This is going on the bug list.
As the code is written there is a non-zero chance that the same algorithm will be evaluated multiple times. We should measure how frequently this repeated work happens and decide whether we need to eliminate it. I have two basic ideas:
1) Make a parameter-space hash table (dictionary) and store everything we have tested. If there is a repeat, just return the stored fitness value (see the sketch after this list). Pro - this should be easy to implement. Con - it may take up a lot of memory.
2) Calculate a unique number for each algorithm (officially the search space is finite, so this should be possible) and use that value in a lookup table. Pro - this should take up much less memory. Con - the calculation, although conceptually trivial, may be hard to get right and may add non-trivial time to the calculations.
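Here is a minimal sketch of what idea (1) could look like. The names `evaluate_fitness` and the tuple-of-parameters key are assumptions for illustration, not the project's actual API:

```python
# Sketch of idea (1): cache fitness values in a dictionary keyed by the
# algorithm's parameters. Assumes parameters can be turned into a hashable tuple.

fitness_cache = {}

def cached_fitness(params, evaluate_fitness):
    """Return a cached fitness if this parameter set was already evaluated,
    otherwise evaluate it once and store the result."""
    key = tuple(params)            # parameters must be hashable to serve as a dict key
    if key in fitness_cache:
        return fitness_cache[key]  # repeat: skip the expensive evaluation
    value = evaluate_fitness(params)
    fitness_cache[key] = value
    return value
```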
I would probably start with 1 and then, if we determine that repeats are happening often, do a timing study for 2. A rough way to count the repeats is sketched below.
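To measure how often repeats occur before committing to either option, something like the following could be wrapped around the evaluation loop. The tuple-of-parameters key and `record_evaluation` hook are hypothetical, matching the sketch above:

```python
# Sketch for measuring the frequency of repeated evaluations.

from collections import Counter

evaluation_counts = Counter()

def record_evaluation(params):
    """Call once per fitness evaluation with the algorithm's parameters."""
    evaluation_counts[tuple(params)] += 1

def repeat_fraction():
    """Fraction of evaluations that were repeats of an earlier parameter set."""
    total = sum(evaluation_counts.values())
    repeats = total - len(evaluation_counts)   # evaluations beyond the first per key
    return repeats / total if total else 0.0
```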