Open ndangtt opened 1 year ago
Once a meta-irace tuning is finished, we want to evaluate the tuned irace configurations and see whether the results are better than default irace. We will call that process "meta-irace evaluation". This thread is for us to discuss how to set that up.
1. Instance sets: `train`/`validation`/`test`. During meta-irace tuning, we use `train` for each irace tuning and `validation` for evaluating the best solver configuration returned by irace. During meta-irace evaluation, we can use `train` for each irace tuning again and `test` for evaluating each best solver configuration returned. We should discuss with Thomas and Manuel whether we need a separate `train` set for meta-irace evaluation. For now, let's use the original `train` set (the one used by meta-irace tuning); a minimal split sketch is given after this list.
2. Implementation: We can re-use the code of `target_irace`, but we should pass the following information in meta-irace's scenario instance list:
   * `<"Path to surrogate model (.bin file)"> <"tuning_instance_set"> <"evaluation_instance_set"> runtime/quality`

   Instead of passing "instances", we can pass "trainInstances" and "testInstances". We need to make sure the evaluation data is saved into `.Rdata`; a string-encoding sketch is given after this list.
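To make the split in item 1 concrete, here is a minimal Python sketch of a train/validation/test split over an instance list. The 60/20/20 ratios, the `split_instances` helper, and the instance paths are illustrative assumptions, not part of the actual meta-irace setup.

```python
import random

def split_instances(instances, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the instance list and split it into train/validation/test."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    shuffled = list(instances)
    random.Random(seed).shuffle(shuffled)
    n_train = int(ratios[0] * len(shuffled))
    n_valid = int(ratios[1] * len(shuffled))
    train = shuffled[:n_train]                         # used inside each irace tuning
    validation = shuffled[n_train:n_train + n_valid]   # scores configurations during meta-irace tuning
    test = shuffled[n_train + n_valid:]                # held out for meta-irace evaluation
    return train, validation, test

if __name__ == "__main__":
    instances = [f"instances/inst_{i:03d}" for i in range(100)]
    train, validation, test = split_instances(instances)
    print(len(train), len(validation), len(test))  # 60 20 20
```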
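For item 2, here is a hedged sketch of how each meta-irace scenario instance could be encoded as a single string and parsed back inside `target_irace`. The `MetaInstance` class, the helper names, and the example file names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MetaInstance:
    model_path: str       # path to the surrogate model (.bin file)
    tuning_set: str       # instance set used inside the irace tuning
    evaluation_set: str   # instance set used to evaluate the configuration irace returns
    objective: str        # "runtime" or "quality"

def format_meta_instance(mi: MetaInstance) -> str:
    """Encode one meta-irace scenario instance as a whitespace-separated string."""
    return f"{mi.model_path} {mi.tuning_set} {mi.evaluation_set} {mi.objective}"

def parse_meta_instance(line: str) -> MetaInstance:
    """Decode the string again on the target_irace side."""
    model_path, tuning_set, evaluation_set, objective = line.split()
    return MetaInstance(model_path, tuning_set, evaluation_set, objective)

if __name__ == "__main__":
    line = format_meta_instance(
        MetaInstance("models/surrogate.bin", "train_instances.txt", "test_instances.txt", "runtime"))
    print(parse_meta_instance(line))
```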
TODO:

Just implemented this with 132c499, but in a much better way. Instead of passing strings of file names, we pass the model objects and the lists directly thanks to auto-optimization/iracepy#30.
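For comparison with the string-based sketch above, this is a plain-Python sketch of the object-passing idea from the follow-up note: each meta-irace instance carries the loaded surrogate model and the instance lists directly. The field names are illustrative assumptions; the real layout is whatever 132c499 and auto-optimization/iracepy#30 define.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class MetaInstance:
    model: Any                  # loaded surrogate model object, not a .bin path
    train_instances: List[str]  # list passed to the inner irace tuning
    test_instances: List[str]   # list used to evaluate the configuration irace returns
    objective: str = "runtime"  # "runtime" or "quality"

if __name__ == "__main__":
    # Dummy stand-ins: a real run would load the surrogate model once and hand
    # the same object (and the instance lists) to every inner irace run.
    dummy_model = object()
    meta_instance = MetaInstance(model=dummy_model,
                                 train_instances=["inst_001", "inst_002"],
                                 test_instances=["inst_003"])
    print(meta_instance)
```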