Must talk to @vivekaxl and @Ginfung about this. If @ginfung is working on combining constrained and inductively learned trees, maybe the following is how @vivekaxl would handle surrogate generation in black-box optimization (no specific constraint knowledge).
But must ask @vivekaxl to ensure we get the "gale is ok" paper first.
N = 256 # (say) so sqrt(N) = 16

def training(pop):
    "Only evals sqrt(N) examples: one representative row per leaf cluster."
    clusters = where(pop)          # recursively cluster the candidates
    leafs = leaves(clusters)       # clusters with no sub-clusters
    return [model.eval(any(leaf.rows)) for leaf in leafs]   # "any" = pick one row from the leaf

def trees(pop=None):
    pop = pop or [model.anyDecisions() for _ in range(N)]
    # note: the following only evals sqrt(N) examples
    sample = training(pop)
    cart1 = cart(sample, objective1)   # decision tree for objective1 learned from sample
    cart2 = cart(sample, objective2)   # decision tree for objective2 learned from sample
    cartX = cart(sample, objectiveX)   # decision tree for objectiveX learned from sample
    return pop, [cart1, cart2, cartX]
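For concreteness, here is a minimal sketch of what that CART step could look like with scikit-learn, assuming each entry in "sample" is a (decision-vector, objective-vector) pair; the names build_surrogates, decisions, and per_obj are illustrative only, not anything from GALE or the model above.

from sklearn.tree import DecisionTreeRegressor

def build_surrogates(sample):
    "Fit one CART regressor per objective from the sqrt(N) evaluated examples."
    decisions = [list(row) for row, _ in sample]            # model inputs
    per_obj = list(zip(*[objs for _, objs in sample]))      # one tuple of values per objective
    carts = []
    for values in per_obj:
        cart = DecisionTreeRegressor()                      # assumption: default CART settings
        cart.fit(decisions, values)
        carts.append(cart)
    return carts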
Now run DE as normal on "pop" but, instead of evaluating candidates w.r.t. the model, evaluate them w.r.t. the trees.
Every so often, re-run trees(); e.g. after each generation of DE, or after 25% of the frontier has been replaced, or if the error rate of the CART predictions grows in a nasty way.
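A rough sketch of how that loop might be wired together, under some loud assumptions: de_generation (one DE generation that returns the new population plus how many frontier members it replaced) and cart_error (the surrogates' error on a few freshly evaluated rows) are assumed helpers, the 25% / 0.2 thresholds are placeholders, and surrogate_score assumes the carts expose a predict() like the scikit-learn regressors sketched above.

def surrogate_score(candidate, carts):
    "Score a candidate on every objective with the CART surrogates, never calling model.eval."
    return [cart.predict([candidate])[0] for cart in carts]

def optimize(max_gens=100):
    pop, carts = trees()                      # costs only sqrt(N) true evaluations
    replaced_so_far = 0
    for gen in range(max_gens):
        # de_generation is an assumed helper: one DE generation scored by the surrogates
        pop, replaced = de_generation(pop, lambda x: surrogate_score(x, carts))
        replaced_so_far += replaced
        # retrain triggers from the note above: every generation is the simplest policy;
        # otherwise wait until 25% of the frontier has been replaced, or until the
        # CART error on freshly evaluated rows grows in a nasty way
        if replaced_so_far >= 0.25 * len(pop) or cart_error(pop, carts) > 0.2:
            pop, carts = trees(pop)
            replaced_so_far = 0
    return pop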