FlashRepo / Flash-SingleConfig

Single objective performance optimization for highly configurable systems

Another new technique! Why? #1

Open vivekaxl opened 7 years ago

vivekaxl commented 7 years ago

In our previous paper, we used a rank-based progressive sampling technique to optimize configurable software systems. However, progressive sampling uses a validation set to drive the iterative sampling, and that adds extra cost to the sampling process. Caution: none of the previous papers, including ours, discusses this validation set.

This new technique does not use a validation set at all, and hence requires far fewer evaluations, lowering the cost of optimization by a factor of 2.28-46.45.

This technique is inspired by both active learning and rank-based sampling. The process (sketched in code after this list) works as follows:

  1. Evaluate an initial set of samples (30 in our experiments).
  2. Build a model (CART) from these configurations and use it to predict performance scores for the rest of the samples (|dataset| - 30).
  3. Choose the best configuration (based on predicted performance score), evaluate it, and add it to the training set. The training set now has 31 samples, whereas the testing set has |dataset| - 31 samples.
  4. Continue until the stopping criterion is reached. In our case the stopping criterion is: if the newly added point is not better (lower) than the best score in the training set, a life is lost. The number of lives used in the experiments is 10.
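
A minimal sketch of that loop, assuming a pre-measured pool of configurations (features `X`, measured performance `y`, lower is better) and scikit-learn's `DecisionTreeRegressor` as the CART model; names such as `flash_sample`, `initial_size`, and `lives` are illustrative, not from the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def flash_sample(X, y, initial_size=30, lives=10, seed=0):
    """Grow the training set one 'best predicted' configuration at a time (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    pool = list(rng.permutation(len(X)))                # indices of not-yet-evaluated configs
    train = [pool.pop() for _ in range(initial_size)]   # step 1: evaluate an initial random set
    best_seen = min(y[i] for i in train)
    while lives > 0 and pool:
        model = DecisionTreeRegressor().fit(X[train], y[train])   # step 2: build CART on measured configs
        preds = model.predict(X[pool])                             # predict scores for the remaining pool
        candidate = pool.pop(int(np.argmin(preds)))                # step 3: best predicted config (lower = better)
        train.append(candidate)                                    # evaluate it (here: look up y[candidate])
        if y[candidate] < best_seen:
            best_seen = y[candidate]                               # improvement: keep going
        else:
            lives -= 1                                             # step 4: no improvement, lose a life
    return min(train, key=lambda i: y[i])                          # index of the best measured configuration
```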
timm commented 7 years ago

i think i can speed up part 3.

take a random config. pass it down the branch that goes to the best leaf. mutate that instance according to the nodes in that branch. evaluate that "best" mutant.

note that in your approach, step 3 requires nearly 3.9 million evals for sql at each step. in mine, it requires only one
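
A rough sketch of that shortcut, assuming the same scikit-learn CART as in the sketch above and a numeric configuration vector; the clamping mutation (`min`/`max` against each split threshold) is an illustrative choice, since the thread does not pin down the exact mutation operator:

```python
import numpy as np

def mutate_towards_best_leaf(model, config):
    """Push one random config down the path to the leaf with the lowest predicted score."""
    t = model.tree_
    leaves = np.where(t.children_left == -1)[0]
    best_leaf = leaves[np.argmin(t.value[leaves].ravel())]   # leaf with the best (lowest) prediction
    mutant = np.asarray(config, dtype=float).copy()
    node = 0
    while t.children_left[node] != -1:                       # walk root -> best leaf
        f, thr = t.feature[node], t.threshold[node]
        left = t.children_left[node]
        if _contains(t, left, best_leaf):
            mutant[f] = min(mutant[f], thr)                  # satisfy "feature <= threshold"
            node = left
        else:
            mutant[f] = max(mutant[f], thr + 1e-6)           # satisfy "feature > threshold"
            node = t.children_right[node]
    return mutant                                            # the single "best" mutant to evaluate

def _contains(t, node, target):
    """True if `target` lies in the subtree rooted at `node`."""
    if node == target:
        return True
    if t.children_left[node] == -1:
        return False
    return _contains(t, t.children_left[node], target) or _contains(t, t.children_right[node], target)
```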

timm commented 7 years ago

you need a name for your new method

NAIR: rating acquired incremental reasons