You can find a technical report that describes irace at http://iridia.ulb.ac.be/IridiaTrSeries/link/IridiaTr2011-004.pdf.
Lars, can you add one sentence and this link to the tutorial? I will then do the same in tuneParams.
Andreas does this answer your question?
Done.
Thanks! I had looked at the linked document before, but I just realized I was making a conceptual mistake. For some reason I was assuming that irace -> some optimizer -> tuning, i.e. that irace was another layer that controlled the parameters of one or more competing optimizers, which in turn performed the tuning (a race between optimizers doing the tuning). But I guess it's much simpler: irace itself does the tuning by sampling the parameters of the ML algorithm?
Yes, that is correct. You can think of irace as being an optimiser itself.
> But I guess it's much simpler, irace itself does the tuning by sampling the parameters of the ML algorithm?
Well, to give a slightly longer answer than Lars: irace is an algorithm configurator. It is a method that optimizes the parameters of an algorithm over an instance space (so irace itself just solves an optimization problem, and is an optimizer). What is a bit confusing is that one (popular) area of application for irace is configuring optimization algorithms; the authors also mention this in the paper and give examples. But in mlr the algorithms whose parameters we optimize are of course machine learning algorithms.
I hope that clears this up....
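For concreteness, here is a minimal sketch of how irace is selected as the tuning method in mlr (the specific learner, parameter bounds, and budget are just illustrative, and this assumes a recent mlr version plus kernlab for the SVM learner): you define a parameter set, choose the irace control object, and tuneParams lets irace sample candidate configurations and race them against each other on resampled performance.

```r
library(mlr)

# Parameter space that irace samples candidate configurations from
ps = makeParamSet(
  makeNumericParam("C",     lower = -5, upper = 5, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -5, upper = 5, trafo = function(x) 2^x)
)

# Select irace as the tuner; maxExperiments bounds the total tuning budget
ctrl = makeTuneControlIrace(maxExperiments = 200L)

# Candidate settings are compared (raced) on cross-validated performance
rdesc = makeResampleDesc("CV", iters = 3L)
res = tuneParams("classif.ksvm", task = iris.task, resampling = rdesc,
                 par.set = ps, control = ctrl)
print(res)
```

So from mlr's point of view irace is just another TuneControl, on the same footing as, say, grid or random search.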
I linked the irace TR in the TuneControl docs now.
Will close here. Reopen if questions remain.
I'm playing around with model tuning and the iterated F-racing approach is nice since it is very flexible with regard to the types of parameters one can tune. I looked at the mlr and irace documentation, and I think I roughly understand what it is doing. What I haven't been able to learn is what optimizers are being used when I run irace within mlr. Any pointers where I can find information on that? Thanks.