MLBazaar / BTB

A simple, extensible library for developing AutoML systems
https://mlbazaar.github.io/BTB/
MIT License

Implement a Benchmark for BTB #139

Closed pvk-developer closed 4 years ago

pvk-developer commented 5 years ago

As part of #131, with the new structure in place, we would like to add a benchmark.

In order to implement a benchmark method, the following components are proposed: a Challenge class that wraps a scoring problem, a tuner function that runs a tuner against a challenge and returns the best score, and a benchmark function that evaluates a tuner function over a list of challenges.

An example of a challenge would be as follows:

# Assumed imports from the new btb.tuning API (#131); exact paths may differ.
# `Challenge` is the benchmark base class proposed in this issue.
from btb.tuning.hyperparams import IntHyperParam
from btb.tuning.tunable import Tunable


class Rosenbrock(Challenge):
    """Negated Rosenbrock function, so that higher scores are better."""

    def __init__(self, a=1, b=1):
        self.a = a
        self.b = b

    @classmethod
    def get_tunable(cls):
        # Search space: two integer hyperparameters in [-50, 50].
        x = IntHyperParam(min=-50, max=50)
        y = IntHyperParam(min=-50, max=50)
        return Tunable({'x': x, 'y': y})

    def score(self, x, y):
        return -1 * ((self.a - x)**2 + self.b * (y - x**2)**2)
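
For reference, with these defaults the best achievable score would be 0, reached at x=1, y=1 (the Rosenbrock optimum at (a, a²)), which gives the tuners a known target. A quick sanity check of the class above:

challenge = Rosenbrock()
assert challenge.score(1, 1) == 0    # global optimum of the negated function
assert challenge.score(0, 0) == -1   # non-optimal points score below 0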

An example of a Python function that runs a tuner and returns the best score found would be as follows:

import numpy as np

# Assumed import path for GPTuner from the new btb.tuning API.
from btb.tuning.tuners import GPTuner


def gp_tuner_function(scorer, tunable, iterations):
    """Run a GPTuner for `iterations` steps and return the best score seen."""
    tuner = GPTuner(tunable)
    best_score = -np.inf

    for _ in range(iterations):
        proposal = tuner.propose()
        score = scorer(**proposal)
        tuner.record(proposal, score)
        best_score = max(score, best_score)

    return best_score
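
Standalone, such a function could be exercised directly against the challenge above, e.g. (illustrative only, reusing the names defined in this proposal):

challenge = Rosenbrock()
tunable = Rosenbrock.get_tunable()
best = gp_tuner_function(challenge.score, tunable, iterations=100)
print(best)  # the closer to 0, the better the tuner did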

Our benchmark function can be:

import pandas as pd


def benchmark(tuner_function, challenges=Rosenbrock, iterations=1000):
    """Evaluate a tuner function against one or more challenges."""
    if not isinstance(challenges, list):
        challenges = [challenges]

    results = list()
    for challenge_class in challenges:
        challenge = challenge_class()
        tunable = challenge.get_tunable()
        score = tuner_function(challenge.score, tunable, iterations)
        results.append({
            'score': score,
            'iterations': iterations,
            'challenge': challenge_class.__name__,
        })

    return pd.DataFrame(results)
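
To illustrate the intended usage, building on the examples above (the names come from this proposal rather than an existing API):

results = benchmark(gp_tuner_function, challenges=[Rosenbrock], iterations=100)
print(results)

This would return a pandas DataFrame with one row per challenge and the columns score, iterations and challenge, which makes it easy to compare several tuner functions side by side.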