DE0CH / irace-tuning


Evaluation for meta-irace #4

Open ndangtt opened 1 year ago

ndangtt commented 1 year ago

Once a meta-irace tuning run is finished, we want to evaluate the tuned irace configurations and see whether their results are better than default irace's. We will call that process "meta-irace evaluation". This thread is for discussing how to set that up.

  1. Instance sets: we currently split the data into three sets: train/validation/test. During meta-irace tuning, we use train for each irace tuning and validation for evaluating the best solver configuration returned by irace. During meta-irace evaluation, we can use train for each irace tuning again and test for evaluating each best solver configuration returned.

We should discuss with Thomas and Manuel whether we need a separate train set for meta-irace evaluation. For now, let us use the original train set (the one used by meta-irace tuning). A sketch of how the three sets are used across the two phases follows below.
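A minimal sketch of the intended usage, assuming a simple random split; the function name and the 60/20/20 ratios are illustrative assumptions, not the repository's actual code:

```python
import random

def split_instances(instances, seed=42, fractions=(0.6, 0.2, 0.2)):
    """Illustrative train/validation/test split; the 60/20/20 ratios are assumed."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    n_train = int(fractions[0] * len(shuffled))
    n_valid = int(fractions[1] * len(shuffled))
    train = shuffled[:n_train]                        # each inner irace tuning runs on this set
    validation = shuffled[n_train:n_train + n_valid]  # meta-irace tuning scores the returned configuration here
    test = shuffled[n_train + n_valid:]               # meta-irace evaluation scores the returned configuration here
    return train, validation, test
```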

  2. Implementation: We can re-use the code of target_irace, but we should pass the following information in meta-irace's scenario instance list (see the sketch after this list):

  * Configurations we want to evaluate.
  * Instances: instead of passing "instances", we can pass "trainInstances" and "testInstances". We need to make sure the evaluation data is saved into the .Rdata file.
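A hedged sketch of what one entry of the meta-irace scenario instance list could carry. The key names `trainInstances`/`testInstances` come from the item above; every other field name and value is an illustrative assumption:

```python
# Hypothetical shape of one entry in meta-irace's scenario instance list.
# Only "trainInstances"/"testInstances" are taken from the discussion above;
# the remaining fields are assumptions for illustration.
meta_instance = {
    "surrogate_model": "path/to/surrogate.bin",   # surrogate model used by the inner irace run
    "trainInstances": ["inst-001", "inst-002"],   # passed to irace as its training instances
    "testInstances": ["inst-101", "inst-102"],    # used only to evaluate the returned configuration
    "objective": "runtime",                       # "runtime" or "quality"
}
```

Persisting the evaluation results into an .Rdata file would presumably happen on the R/irace side (for example through irace's logFile); the exact mechanism depends on how target_irace is implemented.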

TODO:

DE0CH commented 1 year ago
> 2. Implementation: We can re-use the code of `target_irace`, but we should pass the following information in meta-irace's scenario instance list:
>
> * `<"Path to surrogate model (.bin file)"> <"tuning_instance_set"> <"evaluation_instance_set"> runtime/quality`

Just implemented this in 132c499, but in a much better way: instead of passing strings of file names, we pass the model objects and the instance lists directly, thanks to auto-optimization/iracepy#30. A rough sketch of the idea is below.
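A rough sketch of the object-based approach, assuming a closure-style target_irace. The function name, parameters, and helper callables are illustrative assumptions, not the code of 132c499:

```python
from typing import Callable, Sequence

def make_target_irace(
    surrogate_model: object,                 # already-loaded model object, not a .bin path string
    train_instances: Sequence[str],
    test_instances: Sequence[str],
    run_inner_irace: Callable,               # hypothetical helper: runs one irace tuning on train_instances
    evaluate_config: Callable,               # hypothetical helper: scores a configuration on test_instances
) -> Callable:
    """Build a target_irace that captures the model and instance lists directly,
    so nothing has to be re-parsed from a whitespace-separated instance string."""
    def target_irace(irace_configuration):
        best_solver_config = run_inner_irace(surrogate_model, train_instances, irace_configuration)
        return evaluate_config(surrogate_model, best_solver_config, test_instances)
    return target_irace
```

The design point is that the meta-level scenario hands over Python objects rather than file names, which avoids reloading the surrogate model and re-reading instance files inside every inner irace run.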