Currently you can access them from `evaluations.log` or with `gamaobject._evaluation_library.n_best(n, with_pipelines)`. The first is a little cumbersome if you still have access to the gama object; the second is unintuitive when you use BestFitProcessing, since you then have to set `with_pipelines=False` explicitly. It would be great to have:
- [ ] something like a `gamaobject.leaderboard` property which returns a dataframe with the results of each evaluation
- [ ] modify `n_best` behavior to return the best evaluation objects that are relevant to the initial configuration (if no pipelines are ever meant to be stored, the `with_pipelines=True` default makes no sense)
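A minimal sketch of what the proposed `leaderboard` property could look like. The class and attribute names below (`Evaluation`, `score`, `duration`, `pipeline`, and the internal `evaluations` list) are hypothetical stand-ins, not GAMA's actual internals:

```python
import pandas as pd


class Evaluation:
    """Hypothetical stand-in for a single evaluation record."""

    def __init__(self, pipeline, score, duration):
        self.pipeline = pipeline
        self.score = score
        self.duration = duration


class EvaluationLibrary:
    """Hypothetical stand-in for gamaobject._evaluation_library."""

    def __init__(self, evaluations):
        self.evaluations = evaluations

    def n_best(self, n, with_pipelines=True):
        # Sort by score, best first; optionally drop the pipeline objects.
        best = sorted(self.evaluations, key=lambda e: e.score, reverse=True)[:n]
        if not with_pipelines:
            return [(e.score, e.duration) for e in best]
        return best


class Gama:
    def __init__(self, evaluations):
        self._evaluation_library = EvaluationLibrary(evaluations)

    @property
    def leaderboard(self):
        """Proposed: one row per evaluation, best score first."""
        rows = [
            (str(e.pipeline), e.score, e.duration)
            for e in self._evaluation_library.evaluations
        ]
        return (
            pd.DataFrame(rows, columns=["pipeline", "score", "duration"])
            .sort_values("score", ascending=False)
            .reset_index(drop=True)
        )
```

With something like this, `gamaobject.leaderboard` would give a plain DataFrame view without reaching into `_evaluation_library`, regardless of which post-processing was used.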