jansel / opentuner

An extensible framework for program autotuning
http://opentuner.org/
MIT License

How to make OpenTuner run all permutations #126

Open binhnguyennus opened 5 years ago

binhnguyennus commented 5 years ago

Hi @jansel and @jbosboom,

Thank you so much for your useful framework! I really enjoy using it for my research. Currently, I am having a problem that I hope you can spare some time to take a look at.

I have a list of elements:

basket = [0, 1, 2, 3, ..., 99]

and a foo function:

foo(basket) = bar

(foo just does some graph-related calculations and returns the integer bar)

My goal is to find the minimum value of bar and the corresponding permutation of basket. So what I did was:

In manipulator, I used a PermutationParameter (like in the tsp example):

    def manipulator(self):
        manipulator = ConfigurationManipulator()
        manipulator.add_parameter(PermutationParameter(0, basket))  # 0 here means cfg[0]
        return manipulator

And in run, since I wanted to find the permutation that produces the minimum bar:

    def run(self, desired_result, input, limit):
        cfg = desired_result.configuration.data
        permutation = cfg[0]
        bar = foo(permutation)
        return opentuner.resultsdb.models.Result(time=0.0, accuracy=-bar)

I used the objective MaximizeAccuracy and expected to get the minimum value of bar, since accuracy=-bar (I did the same as in the unitary example, which ranks fidelities by passing them as the accuracy):

    def objective(self):
        return opentuner.search.objective.MaximizeAccuracy()
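For completeness, the pieces above fit together roughly like this (foo is stubbed out with a placeholder calculation here, and the class name PermutationTuner is just illustrative):

    import opentuner
    from opentuner import MeasurementInterface, Result
    from opentuner.search.manipulator import (ConfigurationManipulator,
                                              PermutationParameter)
    from opentuner.search.objective import MaximizeAccuracy

    basket = list(range(100))

    def foo(permutation):
        # stand-in for the real graph-related calculation
        return sum(i * v for i, v in enumerate(permutation))

    class PermutationTuner(MeasurementInterface):
        def manipulator(self):
            manipulator = ConfigurationManipulator()
            manipulator.add_parameter(PermutationParameter(0, basket))
            return manipulator

        def run(self, desired_result, input, limit):
            permutation = desired_result.configuration.data[0]
            bar = foo(permutation)
            # negate bar so that maximizing accuracy minimizes bar
            return Result(time=0.0, accuracy=-bar)

        def objective(self):
            return MaximizeAccuracy()

    if __name__ == '__main__':
        PermutationTuner.main(opentuner.default_argparser().parse_args())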

And the problem is:

The generated permutations are correct. However, the final output (i.e., the expected minimum bar) is not correct. I suspect this could be because OpenTuner terminated before trying all the permutations.

So, if I want to make OpenTuner run all the possible permutations, which part of the code should I modify? I guess it is something about the termination condition or the search space, but I haven't found it yet.

Thank you very much!

Best regards, Binh.

jansel commented 5 years ago

It is impossible to try all permutations in a human lifetime except for short lists, because the number of permutations grows factorially. For [0, 1, 2, 3, ..., 99] there are 100! ~= 9.3*10^157 possible permutations (there are about 10^80 atoms in the universe, so you can't try all 10^157 of them).
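You can sanity-check that count with the standard library:

    import math
    # exact number of orderings of 100 items: a 158-digit integer, ~9.33e157
    print(math.factorial(100))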

You could try different techniques that handle permutations better. For example, particle swarm optimization in OpenTuner computes a per-element momentum and often does better on permutations where absolute positions or specific key elements matter. Some of the genetic techniques do mutation and crossover on the permutations and handle them well when subsequences matter.
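Techniques can be selected by their registered names on the command line (the exact names vary by version; --list-techniques prints the ones your install knows about, and your_tuner.py below stands in for your script):

    python your_tuner.py --list-techniques
    python your_tuner.py --technique=<name-from-the-list>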

binhnguyennus commented 5 years ago

Hi @jansel ,

Thank you very much for your clarification!

Regarding my attempt to try all the permutations (exhaustive search), I understand that it would take more than my lifetime to finish. My intention was to let the tuner run for several days and hope it would return the optimal solution. If not, I could terminate the tuner and state in my paper "The exhaustive search did not return an optimal solution within 72 hours...", something like that.

The key thing is, I am not sure why OpenTuner did not return the optimal result I expected. I optimized my foo(basket) example using some well-known graph-based techniques like Kernighan–Lin. The KL algorithm can return the correct optimal solution bar within one hour, while OpenTuner does not return the optimal result even after running much longer. Although I have tried all the options (Normal Greedy Mutation, PSO, Global GA, etc.), the results are still not improved. I suspect this could be for two reasons:

  1. OpenTuner uses probabilistic swapping (default p = 0.25), so it can miss the optimal result.
  2. The convergence criterion (convergence_criteria in driver.py) is based on a predefined number of iterations (test_limit), which can also cause the optimal result to be missed.

What do you think about this? I look forward to hearing from you soon. Thanks in advance!

Best regards, Binh.

jansel commented 5 years ago

Perhaps you are looking for:

  --test-limit TEST_LIMIT
                        stop tuning after given tests count
  --stop-after STOP_AFTER
                        stop tuning after given seconds

Adding --stop-after=259200 would tune for 3 days then stop.
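For example (with your_tuner.py standing in for your tuner script):

    python your_tuner.py --stop-after=259200  # tune for 72 hours, then stop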

I would strongly recommend against mentioning that exhaustive search did not work for a search space of size 10^157. If your reviewers are on top of things, that should trigger an automatic reject, as exhaustive search is an insane technique to use for such a large search space. It would result in searching one tiny corner of the search space (approximately 0% of the total area) and ignoring the rest. It is nonsense. Even a purely random search would be far better.

Feel free to submit a PR to add the mentioned graph-based techniques to OpenTuner; that seems like a good addition.

binhnguyennus commented 5 years ago

Hi @jansel ,

It's cool to receive such a prompt reply, especially on a Saturday night :)

Exhaustive search (then terminating early) is a naive thing for sure! I thought it could serve as a good baseline, but now I have to figure out an alternative solution.

I saw that all the optimization techniques implemented in OpenTuner are heuristics or meta-heuristics. Have you implemented any machine-learning-based optimization in OpenTuner so far?

Have a nice weekend! Binh.

jansel commented 5 years ago

Some others were exploring that, but I am not sure how it went.